Dec 03 00:05:58 crc systemd[1]: Starting Kubernetes Kubelet... Dec 03 00:05:58 crc restorecon[4711]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 03 00:05:58 
crc restorecon[4711]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 03 00:05:58 crc restorecon[4711]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 
00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:58 crc restorecon[4711]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:58 crc 
restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:58 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 
crc restorecon[4711]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 
crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 
00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:05:59 crc 
restorecon[4711]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 
00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 
00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc 
restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:05:59 crc restorecon[4711]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:05:59 crc restorecon[4711]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 03 00:05:59 crc kubenswrapper[4811]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 00:05:59 crc kubenswrapper[4811]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 03 00:05:59 crc kubenswrapper[4811]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 00:05:59 crc kubenswrapper[4811]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
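
[Editor's note, not part of the journal output] The deprecation warnings above all point at the kubelet config file passed via --config (see the linked kubelet-config-file documentation). As a minimal, hypothetical sketch of what moving those flags into a KubeletConfiguration might look like, the snippet below writes an illustrative config; the field values are examples only and are not taken from this node, and PyYAML is assumed to be installed.

```python
# Minimal sketch: map the deprecated kubelet flags from the log above onto
# KubeletConfiguration fields. Values are illustrative, not from this node.
# Assumes PyYAML is available (pip install pyyaml).
import yaml

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --system-reserved
    "systemReserved": {"cpu": "500m", "memory": "1Gi"},
    # replaces --register-with-taints
    "registerWithTaints": [
        {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
    ],
    # the log suggests --eviction-hard / --eviction-soft instead of
    # --minimum-container-ttl-duration; threshold here is an example
    "evictionHard": {"memory.available": "100Mi"},
    # replaces --container-runtime-endpoint (a config-file field in
    # recent Kubernetes releases)
    "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
}

with open("kubelet-config.yaml", "w") as f:
    yaml.safe_dump(kubelet_config, f, sort_keys=False)
```

The same file is also where a featureGates map would live; the "unrecognized feature gate" warnings that follow appear to be OpenShift-level gate names that the kubelet's upstream gate parser does not know, so it warns and ignores them.
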
Dec 03 00:05:59 crc kubenswrapper[4811]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 03 00:05:59 crc kubenswrapper[4811]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.965606 4811 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968336 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968356 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968360 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968364 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968369 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968374 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968380 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968385 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968391 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968396 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968402 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968407 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968414 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968420 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968426 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968430 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968435 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968439 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968443 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968447 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968454 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968459 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968464 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968468 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968473 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968478 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968481 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968486 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968489 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968493 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968498 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968501 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968505 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968508 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968512 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968515 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968519 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968522 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968526 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968529 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968533 4811 feature_gate.go:330] unrecognized feature gate: Example Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968536 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968540 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968544 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968548 4811 feature_gate.go:330] unrecognized feature gate: 
MetricsCollectionProfiles Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968551 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968556 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968559 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968563 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968566 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968570 4811 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968573 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968577 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968580 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968583 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968586 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968591 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968595 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968599 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968602 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968606 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968610 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968614 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968617 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968620 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968624 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968627 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968631 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968634 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968637 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.968640 4811 feature_gate.go:330] unrecognized feature gate: 
BootcNodeManagement Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968858 4811 flags.go:64] FLAG: --address="0.0.0.0" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968873 4811 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968880 4811 flags.go:64] FLAG: --anonymous-auth="true" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968886 4811 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968893 4811 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968897 4811 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968903 4811 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968909 4811 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968913 4811 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968917 4811 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968922 4811 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968926 4811 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968931 4811 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968934 4811 flags.go:64] FLAG: --cgroup-root="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968938 4811 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968942 4811 flags.go:64] FLAG: --client-ca-file="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968946 4811 flags.go:64] FLAG: --cloud-config="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968950 4811 flags.go:64] FLAG: --cloud-provider="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968954 4811 flags.go:64] FLAG: --cluster-dns="[]" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968960 4811 flags.go:64] FLAG: --cluster-domain="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968964 4811 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968969 4811 flags.go:64] FLAG: --config-dir="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968973 4811 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968977 4811 flags.go:64] FLAG: --container-log-max-files="5" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968983 4811 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968987 4811 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968991 4811 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.968996 4811 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969000 4811 flags.go:64] FLAG: --contention-profiling="false" Dec 03 00:05:59 crc 
kubenswrapper[4811]: I1203 00:05:59.969004 4811 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969008 4811 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969012 4811 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969016 4811 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969021 4811 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969025 4811 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969029 4811 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969035 4811 flags.go:64] FLAG: --enable-load-reader="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969040 4811 flags.go:64] FLAG: --enable-server="true" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969044 4811 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969049 4811 flags.go:64] FLAG: --event-burst="100" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969054 4811 flags.go:64] FLAG: --event-qps="50" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969058 4811 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969062 4811 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969066 4811 flags.go:64] FLAG: --eviction-hard="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969071 4811 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969075 4811 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969079 4811 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969083 4811 flags.go:64] FLAG: --eviction-soft="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969087 4811 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969091 4811 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969095 4811 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969098 4811 flags.go:64] FLAG: --experimental-mounter-path="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969102 4811 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969106 4811 flags.go:64] FLAG: --fail-swap-on="true" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969110 4811 flags.go:64] FLAG: --feature-gates="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969115 4811 flags.go:64] FLAG: --file-check-frequency="20s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969119 4811 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969123 4811 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969127 4811 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 03 00:05:59 crc 
kubenswrapper[4811]: I1203 00:05:59.969132 4811 flags.go:64] FLAG: --healthz-port="10248" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969136 4811 flags.go:64] FLAG: --help="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969140 4811 flags.go:64] FLAG: --hostname-override="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969143 4811 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969147 4811 flags.go:64] FLAG: --http-check-frequency="20s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969151 4811 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969155 4811 flags.go:64] FLAG: --image-credential-provider-config="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969159 4811 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969163 4811 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969167 4811 flags.go:64] FLAG: --image-service-endpoint="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969171 4811 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969175 4811 flags.go:64] FLAG: --kube-api-burst="100" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969181 4811 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969185 4811 flags.go:64] FLAG: --kube-api-qps="50" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969189 4811 flags.go:64] FLAG: --kube-reserved="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969194 4811 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969198 4811 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969202 4811 flags.go:64] FLAG: --kubelet-cgroups="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969206 4811 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969210 4811 flags.go:64] FLAG: --lock-file="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969214 4811 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969218 4811 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969222 4811 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969228 4811 flags.go:64] FLAG: --log-json-split-stream="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969233 4811 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969237 4811 flags.go:64] FLAG: --log-text-split-stream="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969241 4811 flags.go:64] FLAG: --logging-format="text" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969245 4811 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969250 4811 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969270 4811 flags.go:64] FLAG: --manifest-url="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 
00:05:59.969274 4811 flags.go:64] FLAG: --manifest-url-header="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969279 4811 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969284 4811 flags.go:64] FLAG: --max-open-files="1000000" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969288 4811 flags.go:64] FLAG: --max-pods="110" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969293 4811 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969297 4811 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969302 4811 flags.go:64] FLAG: --memory-manager-policy="None" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969305 4811 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969309 4811 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969313 4811 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969317 4811 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969328 4811 flags.go:64] FLAG: --node-status-max-images="50" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969332 4811 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969336 4811 flags.go:64] FLAG: --oom-score-adj="-999" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969341 4811 flags.go:64] FLAG: --pod-cidr="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969345 4811 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969352 4811 flags.go:64] FLAG: --pod-manifest-path="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969356 4811 flags.go:64] FLAG: --pod-max-pids="-1" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969360 4811 flags.go:64] FLAG: --pods-per-core="0" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969364 4811 flags.go:64] FLAG: --port="10250" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969368 4811 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969372 4811 flags.go:64] FLAG: --provider-id="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969376 4811 flags.go:64] FLAG: --qos-reserved="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969380 4811 flags.go:64] FLAG: --read-only-port="10255" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969384 4811 flags.go:64] FLAG: --register-node="true" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969395 4811 flags.go:64] FLAG: --register-schedulable="true" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969399 4811 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969406 4811 flags.go:64] FLAG: --registry-burst="10" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969411 4811 flags.go:64] FLAG: --registry-qps="5" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969415 
4811 flags.go:64] FLAG: --reserved-cpus="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969420 4811 flags.go:64] FLAG: --reserved-memory="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969426 4811 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969431 4811 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969436 4811 flags.go:64] FLAG: --rotate-certificates="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969440 4811 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969445 4811 flags.go:64] FLAG: --runonce="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969449 4811 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969455 4811 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969460 4811 flags.go:64] FLAG: --seccomp-default="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969464 4811 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969469 4811 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969473 4811 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969478 4811 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969482 4811 flags.go:64] FLAG: --storage-driver-password="root" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969486 4811 flags.go:64] FLAG: --storage-driver-secure="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969491 4811 flags.go:64] FLAG: --storage-driver-table="stats" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969495 4811 flags.go:64] FLAG: --storage-driver-user="root" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969500 4811 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969504 4811 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969508 4811 flags.go:64] FLAG: --system-cgroups="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969512 4811 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969518 4811 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969522 4811 flags.go:64] FLAG: --tls-cert-file="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969526 4811 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969531 4811 flags.go:64] FLAG: --tls-min-version="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969535 4811 flags.go:64] FLAG: --tls-private-key-file="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969539 4811 flags.go:64] FLAG: --topology-manager-policy="none" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969543 4811 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969547 4811 flags.go:64] FLAG: --topology-manager-scope="container" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969551 
4811 flags.go:64] FLAG: --v="2" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969559 4811 flags.go:64] FLAG: --version="false" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969564 4811 flags.go:64] FLAG: --vmodule="" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969569 4811 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.969574 4811 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969677 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969682 4811 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969686 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969690 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969694 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969697 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969701 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969705 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969708 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969712 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969716 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
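The flags.go:64 entries above dump every kubelet flag together with its effective value in the form FLAG: --name="value". A small sketch that collects those pairs into a dictionary is shown below, for example to check which of the deprecated flags from the earlier warnings are actually set on the command line; the input file name is an assumption, while the FLAG: format is taken verbatim from the log.

```python
import re
import sys

# Minimal sketch: collect the effective kubelet command-line flags from the
# `flags.go:64] FLAG: --name="value"` entries above. The input file name is
# an assumption; the FLAG: format is copied from the log text.
FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

def kubelet_flags(text: str) -> dict[str, str]:
    return dict(FLAG_RE.findall(text))

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "kubelet.log"
    flags = kubelet_flags(open(path).read())
    # Example: show the flags that the deprecation warnings above suggest
    # moving into the file referenced by --config.
    for name in ("--container-runtime-endpoint", "--minimum-container-ttl-duration",
                 "--volume-plugin-dir", "--register-with-taints",
                 "--system-reserved", "--pod-infra-container-image"):
        print(f"{name} = {flags.get(name)!r}")
```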
Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969720 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969725 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969729 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969733 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969737 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969741 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969744 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969748 4811 feature_gate.go:330] unrecognized feature gate: Example Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969751 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969756 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969759 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969763 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969772 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969776 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969779 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969783 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969786 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969790 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969793 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969798 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969802 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969807 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969811 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969815 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969819 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969823 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969826 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969830 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969834 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969837 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969841 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969845 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969848 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969851 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969855 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969859 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969863 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969867 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969871 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969874 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969878 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969881 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969885 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969889 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969895 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969898 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969902 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969906 4811 feature_gate.go:330] unrecognized feature 
gate: MetricsCollectionProfiles Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969909 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969912 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969916 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969920 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969923 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969927 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969930 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969934 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969937 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969942 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969946 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.969949 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.970715 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.981763 4811 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.981814 4811 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981900 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981908 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981913 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981919 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981923 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981926 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981932 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981938 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981942 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981947 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981951 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981955 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981959 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981963 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981966 4811 feature_gate.go:330] unrecognized feature gate: Example Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981970 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981974 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981977 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981981 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981985 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981988 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981992 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981995 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.981999 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982003 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982009 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982020 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982026 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982031 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982036 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982042 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982046 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982050 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982054 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982061 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982065 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982070 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982073 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982077 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982082 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982086 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982089 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982093 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982097 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982101 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982104 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982108 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982111 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982114 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982118 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982121 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982124 4811 feature_gate.go:330] unrecognized feature gate: 
HardwareSpeed Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982128 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982131 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982134 4811 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982138 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982141 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982145 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982149 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982152 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982155 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982159 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982162 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982165 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982169 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982172 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982175 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982179 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982183 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982187 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.982190 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.982198 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.984910 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
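After each pass over the configured gates, feature_gate.go:386 logs the resolved set as a Go map dump, "feature gates: {map[Name:bool ...]}". The sketch below parses that dump into a Python dict and splits it into enabled and disabled gates; the sample string is copied (abbreviated) from the entry above, and treating this format as stable is an assumption, since it is simply Go's fmt output for a map.

```python
import re

# Minimal sketch: turn the Go map dump logged at feature_gate.go:386 above
# into a Python dict. SAMPLE is copied (abbreviated) from the log entry;
# relying on this exact format is an assumption.
SAMPLE = ("feature gates: {map[CloudDualStackNodeIPs:true "
          "DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false "
          "KMSv1:true ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")

def parse_feature_gates(entry: str) -> dict[str, bool]:
    body = re.search(r"\{map\[(.*?)\]\}", entry).group(1)
    gates = {}
    for pair in body.split():
        name, value = pair.split(":")
        gates[name] = value == "true"
    return gates

if __name__ == "__main__":
    gates = parse_feature_gates(SAMPLE)
    print("enabled: ", sorted(k for k, v in gates.items() if v))
    print("disabled:", sorted(k for k, v in gates.items() if not v))
```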
Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.984988 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985003 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985015 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985028 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985038 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985048 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985058 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985068 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985078 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985088 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985098 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985109 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985121 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985132 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985145 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985155 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985165 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985175 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985185 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985201 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985214 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985226 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985237 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985248 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985293 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985304 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985313 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985323 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985333 4811 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985345 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985359 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985370 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985383 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985392 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985403 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985413 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985425 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985438 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985450 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985462 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985476 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985486 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985496 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985506 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985514 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 
00:05:59.985523 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985532 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985541 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985552 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985563 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985572 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985580 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985588 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985597 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985605 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985613 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985621 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985629 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985638 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985646 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985653 4811 feature_gate.go:330] unrecognized feature gate: Example Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985662 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985670 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985680 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985692 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985702 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985713 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985721 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985731 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 00:05:59 crc kubenswrapper[4811]: W1203 00:05:59.985740 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.985754 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.986057 4811 server.go:940] "Client rotation is on, will bootstrap in background" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.990394 4811 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.990580 4811 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.991486 4811 server.go:997] "Starting client certificate rotation" Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.991563 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.991784 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-22 10:16:51.411266616 +0000 UTC Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.992000 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 466h10m51.419274738s for next certificate rotation Dec 03 00:05:59 crc kubenswrapper[4811]: I1203 00:05:59.999031 4811 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.001577 4811 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.011702 4811 log.go:25] "Validated CRI v1 runtime API" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.024624 4811 log.go:25] "Validated CRI v1 image API" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.026779 4811 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.030008 4811 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-03-00-00-54-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.030063 4811 
fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.043365 4811 manager.go:217] Machine: {Timestamp:2025-12-03 00:06:00.042192039 +0000 UTC m=+0.184021541 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:304e3ae2-a71e-4783-94bd-e98dcbb7fc0a BootID:349eda2e-d94b-4951-8a31-6d5e4dd813eb Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:22:ae:d8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:22:ae:d8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:9c:f7:77 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e9:ba:c6 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5f:66:93 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c6:e2:68 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:72:5f:4d:b5:32:47 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:96:53:6c:bd:f4:a2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.043614 4811 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.043752 4811 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.044046 4811 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.044201 4811 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.044242 4811 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.044498 4811 topology_manager.go:138] "Creating topology manager with none policy" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.044508 4811 container_manager_linux.go:303] "Creating device plugin manager" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.044750 4811 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.044795 4811 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.045154 4811 state_mem.go:36] "Initialized new in-memory state store" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.047535 4811 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.049308 4811 kubelet.go:418] "Attempting to sync node with API server" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.049361 4811 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.049391 4811 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.049411 4811 kubelet.go:324] "Adding apiserver pod source" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.049426 4811 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.051959 4811 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.052542 4811 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 03 00:06:00 crc kubenswrapper[4811]: W1203 00:06:00.052866 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.052975 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:00 crc kubenswrapper[4811]: W1203 00:06:00.052982 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.053101 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.054243 4811 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055141 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055193 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055210 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055225 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055296 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055314 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055331 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055358 4811 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055376 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055393 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055415 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055430 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.055716 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.056539 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.056614 4811 server.go:1280] "Started kubelet" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.056901 4811 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.057001 4811 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.057544 4811 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.058799 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.058830 4811 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.059062 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:44:53.537367855 +0000 UTC Dec 03 00:06:00 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.059411 4811 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.060581 4811 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 03 00:06:00 crc kubenswrapper[4811]: W1203 00:06:00.060643 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.060700 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.060735 4811 server.go:460] "Adding debug handlers to kubelet server" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.059907 4811 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.060827 4811 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.060723 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="200ms" Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.058869 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.129:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d8bd14d11fe51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:06:00.056553041 +0000 UTC m=+0.198382563,LastTimestamp:2025-12-03 00:06:00.056553041 +0000 UTC m=+0.198382563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.070257 4811 factory.go:55] Registering systemd factory Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.070383 4811 factory.go:221] Registration of the systemd container factory successfully Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.070871 4811 factory.go:153] Registering CRI-O factory Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.070924 4811 factory.go:221] Registration of the crio container factory successfully Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.071040 4811 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.071074 4811 factory.go:103] Registering Raw factory Dec 03 00:06:00 
crc kubenswrapper[4811]: I1203 00:06:00.071104 4811 manager.go:1196] Started watching for new ooms in manager Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.075392 4811 manager.go:319] Starting recovery of all containers Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.077280 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.077473 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.077592 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.077683 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.077765 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.077848 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.077951 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.078036 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.078133 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.078228 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.078348 4811 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.078441 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.078527 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.078644 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.078758 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.078869 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.078959 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.079044 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.079130 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.079207 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.080100 4811 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.080230 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.080356 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.080440 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.080519 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.080600 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.080676 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.080768 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.080850 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.080929 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.081050 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.081132 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.081207 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.081414 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.081515 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.081593 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.081668 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.081781 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.081864 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.081941 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.082020 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.082100 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.082176 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.082288 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.082377 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.082460 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.082538 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.082673 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.082762 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.082840 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.082915 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.082991 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.083072 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.083157 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.083241 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.083345 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.083436 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.083548 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.083631 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.083716 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.083800 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.083879 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.083953 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084027 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084109 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084187 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084290 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084385 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084459 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084533 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084612 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084685 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084759 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084833 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084904 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.084988 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.085062 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.085140 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.085213 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.085319 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.085414 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.085494 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.085568 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.085642 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.085714 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.085791 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.085872 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.085948 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.086034 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.086108 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.086185 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.086290 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.086377 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.086482 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.086566 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.086642 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.086724 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.086801 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" 
seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.086883 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.086963 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.087038 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.087111 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.087190 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.087290 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.087381 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.087466 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.087573 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.087658 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.087735 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 03 00:06:00 crc 
kubenswrapper[4811]: I1203 00:06:00.087820 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.087895 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.087994 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.088119 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.088207 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.088316 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.088401 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.088484 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.088565 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.088658 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.088741 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 
00:06:00.088822 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.088945 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.089139 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.089253 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.089379 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.089505 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.089650 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.089744 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.089843 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.089922 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.090114 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.090231 4811 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.090386 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.090496 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.090587 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.090672 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.090757 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.090836 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.090954 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.091086 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.091174 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.091386 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.091480 4811 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.091608 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.091703 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.091780 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.091856 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.091929 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.092061 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.092175 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.092254 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.092357 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.092431 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.092502 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.092622 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.092733 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.092813 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.092920 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.092995 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.093070 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.093157 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.093279 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.093438 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.093089 4811 manager.go:324] Recovery completed Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.093516 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.093813 4811 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.093963 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.094073 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.094183 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.094460 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.094586 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.094737 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.094865 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.094983 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.095155 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.095338 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.095533 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.095665 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.095810 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.095932 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.096059 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.096171 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.096369 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.096548 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.096659 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.096793 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098107 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098202 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098227 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098251 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098322 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098346 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098383 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098410 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098432 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098475 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098509 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098573 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098629 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098656 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098680 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098702 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098733 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098760 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098781 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098799 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098829 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098846 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098867 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098886 4811 reconstruct.go:97] "Volume reconstruction finished" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.098901 4811 
reconciler.go:26] "Reconciler: start to sync state" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.106980 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.110179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.110210 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.110219 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.111093 4811 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.111552 4811 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.111567 4811 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.111583 4811 state_mem.go:36] "Initialized new in-memory state store" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.113606 4811 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.113660 4811 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.113694 4811 kubelet.go:2335] "Starting kubelet main sync loop" Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.113758 4811 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 03 00:06:00 crc kubenswrapper[4811]: W1203 00:06:00.115565 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.115660 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.120115 4811 policy_none.go:49] "None policy: Start" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.121448 4811 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.121475 4811 state_mem.go:35] "Initializing new in-memory state store" Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.161060 4811 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.177538 4811 manager.go:334] "Starting Device Plugin manager" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.177751 4811 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.178102 4811 server.go:79] "Starting device 
plugin registration server" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.179806 4811 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.179864 4811 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.180232 4811 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.180409 4811 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.180418 4811 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.189341 4811 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.213968 4811 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.214105 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.215645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.215718 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.215732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.216067 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.216408 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.216522 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.217417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.217474 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.217689 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.217883 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.217950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.218030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.218055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.218066 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.218119 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.218890 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.218915 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.218927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.219042 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.219065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.219087 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.219079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.219302 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.219350 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.220065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.220225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.220304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.220531 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.220644 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.220682 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.221169 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.221201 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.221216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.221908 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.221947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.221912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.221963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.221984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.222060 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.222193 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.222228 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.223135 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.223167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.223179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.264819 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="400ms" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.280081 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.281579 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.281623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.281636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.281667 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.282179 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.302885 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.302957 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.302985 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.303016 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.303083 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.303130 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.303173 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.303457 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.303506 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.304086 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.304245 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.304356 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.304436 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.304543 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.304629 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.406834 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.406922 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.406947 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.406974 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407011 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407040 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407068 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407092 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407100 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407247 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407156 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407188 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407213 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407209 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407213 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407221 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407122 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407570 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407603 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407242 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407672 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407637 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407727 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407736 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407779 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407807 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407895 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407854 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407815 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.407860 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.482909 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.484415 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.484455 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.484468 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.484493 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.485117 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.552732 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.558880 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.581445 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: W1203 00:06:00.588494 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-60aca419b2762c2d5481a50c9aab7245abe06f85d10d653eee86902cfd8e4533 WatchSource:0}: Error finding container 60aca419b2762c2d5481a50c9aab7245abe06f85d10d653eee86902cfd8e4533: Status 404 returned error can't find the container with id 60aca419b2762c2d5481a50c9aab7245abe06f85d10d653eee86902cfd8e4533 Dec 03 00:06:00 crc kubenswrapper[4811]: W1203 00:06:00.593242 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-842bdd7dc93fa97f9dcdc041905f484fed2b654b9b2d7097ea11cff40139ffb6 WatchSource:0}: Error finding container 842bdd7dc93fa97f9dcdc041905f484fed2b654b9b2d7097ea11cff40139ffb6: Status 404 returned error can't find the container with id 842bdd7dc93fa97f9dcdc041905f484fed2b654b9b2d7097ea11cff40139ffb6 Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.603575 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.610425 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:00 crc kubenswrapper[4811]: W1203 00:06:00.631481 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8d6e5a0a15462f5c4467e778293adeb92e51e671d0c37b6ebab09a3f2bbcaf29 WatchSource:0}: Error finding container 8d6e5a0a15462f5c4467e778293adeb92e51e671d0c37b6ebab09a3f2bbcaf29: Status 404 returned error can't find the container with id 8d6e5a0a15462f5c4467e778293adeb92e51e671d0c37b6ebab09a3f2bbcaf29 Dec 03 00:06:00 crc kubenswrapper[4811]: W1203 00:06:00.633786 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-055a691c9bd573068ca7368294f0da6fe42adae5377e02bd4917ea98de01a36d WatchSource:0}: Error finding container 055a691c9bd573068ca7368294f0da6fe42adae5377e02bd4917ea98de01a36d: Status 404 returned error can't find the container with id 055a691c9bd573068ca7368294f0da6fe42adae5377e02bd4917ea98de01a36d Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.666424 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="800ms" Dec 03 00:06:00 crc kubenswrapper[4811]: W1203 00:06:00.868058 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.868160 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.885590 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.886817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.886858 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.886870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:00 crc kubenswrapper[4811]: I1203 00:06:00.886895 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 00:06:00 crc kubenswrapper[4811]: E1203 00:06:00.887371 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Dec 03 00:06:01 crc kubenswrapper[4811]: W1203 00:06:01.054873 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 00:06:01 crc kubenswrapper[4811]: E1203 00:06:01.055408 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.057888 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.060113 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:32:15.376199234 +0000 UTC Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.060158 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 161h26m14.316043493s for next certificate rotation Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.120009 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50"} Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.120132 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8d6e5a0a15462f5c4467e778293adeb92e51e671d0c37b6ebab09a3f2bbcaf29"} Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.122586 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390" exitCode=0 Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.122673 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390"} Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.122725 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"82047f123df7df84ee4d443f80c83c8e6b4c12e6c8c85f530bd26ddde0ed590c"} Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.122829 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.124835 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.124870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.124881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.125699 4811 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1" exitCode=0 Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.125758 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1"} Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.125833 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"842bdd7dc93fa97f9dcdc041905f484fed2b654b9b2d7097ea11cff40139ffb6"} Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.125988 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.126427 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.127015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.127061 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.127074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.127364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.127407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.127425 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.127703 4811 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564" exitCode=0 Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.127754 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564"} Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.127775 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"60aca419b2762c2d5481a50c9aab7245abe06f85d10d653eee86902cfd8e4533"} Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.127837 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.128517 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.128550 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.128559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.129697 4811 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92" exitCode=0 Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.129732 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92"} Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.129761 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"055a691c9bd573068ca7368294f0da6fe42adae5377e02bd4917ea98de01a36d"} Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.129846 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.130662 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.130690 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.130699 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:01 crc kubenswrapper[4811]: W1203 00:06:01.148053 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
38.102.83.129:6443: connect: connection refused Dec 03 00:06:01 crc kubenswrapper[4811]: E1203 00:06:01.148164 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:01 crc kubenswrapper[4811]: E1203 00:06:01.468231 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="1.6s" Dec 03 00:06:01 crc kubenswrapper[4811]: W1203 00:06:01.512613 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.129:6443: connect: connection refused Dec 03 00:06:01 crc kubenswrapper[4811]: E1203 00:06:01.512697 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.129:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.688026 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.689131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.689168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.689182 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:01 crc kubenswrapper[4811]: I1203 00:06:01.689205 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 00:06:01 crc kubenswrapper[4811]: E1203 00:06:01.689702 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.129:6443: connect: connection refused" node="crc" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.136854 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c2c6f29b65991f85990666d9b5ac4d86ba58d8248ab611dc49bd6ea44808a4fa"} Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.136980 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.138183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.138212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.138222 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.140777 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"265e4edcc98daf63d66695692e65ca0749f6383ff716dc04b1e4f283d437f640"} Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.140805 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"662be78be83c4fc0261e0810b70e37365749e1ef960db2bf94ec025e90ca96fd"} Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.140818 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a2631b0da901ad6d3813ac0e4eefb7ddb376e9bca75fb6737cc154e9336bea38"} Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.140884 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.141609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.141632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.141649 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.145762 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da"} Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.145844 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc"} Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.145859 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e"} Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.145785 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.152287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.152317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.152330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.154710 4811 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9"} Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.154763 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d"} Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.154779 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3"} Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.154791 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710"} Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.156570 4811 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1" exitCode=0 Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.156605 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1"} Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.156707 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.158670 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.158721 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:02 crc kubenswrapper[4811]: I1203 00:06:02.158732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.171394 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240"} Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.171462 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.173439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.173495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.173518 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.176639 4811 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b" exitCode=0 Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.176751 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b"} Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.176868 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.177013 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.178460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.178608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.178682 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.178480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.178821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.178842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.290434 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.292132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.292195 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.292220 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:03 crc kubenswrapper[4811]: I1203 00:06:03.292294 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 00:06:04 crc kubenswrapper[4811]: I1203 00:06:04.187059 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 00:06:04 crc kubenswrapper[4811]: I1203 00:06:04.187141 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:04 crc kubenswrapper[4811]: I1203 00:06:04.187228 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d"} Dec 03 00:06:04 crc kubenswrapper[4811]: I1203 00:06:04.187338 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452"} Dec 03 00:06:04 crc kubenswrapper[4811]: I1203 00:06:04.187368 4811 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001"} Dec 03 00:06:04 crc kubenswrapper[4811]: I1203 00:06:04.187386 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d"} Dec 03 00:06:04 crc kubenswrapper[4811]: I1203 00:06:04.188519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:04 crc kubenswrapper[4811]: I1203 00:06:04.188578 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:04 crc kubenswrapper[4811]: I1203 00:06:04.188600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:05 crc kubenswrapper[4811]: I1203 00:06:05.082499 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:05 crc kubenswrapper[4811]: I1203 00:06:05.193256 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44"} Dec 03 00:06:05 crc kubenswrapper[4811]: I1203 00:06:05.193365 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:05 crc kubenswrapper[4811]: I1203 00:06:05.193493 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:05 crc kubenswrapper[4811]: I1203 00:06:05.194162 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:05 crc kubenswrapper[4811]: I1203 00:06:05.194206 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:05 crc kubenswrapper[4811]: I1203 00:06:05.194218 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:05 crc kubenswrapper[4811]: I1203 00:06:05.197404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:05 crc kubenswrapper[4811]: I1203 00:06:05.197468 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:05 crc kubenswrapper[4811]: I1203 00:06:05.197488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:05 crc kubenswrapper[4811]: I1203 00:06:05.512871 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.195998 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.196132 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.197078 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 
00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.197107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.197116 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.197511 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.197545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.197556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.335982 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.336207 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.337898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.337958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:06 crc kubenswrapper[4811]: I1203 00:06:06.337981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.123781 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.124020 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.125285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.125333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.125345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.328020 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.328228 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.329594 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.329632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.329644 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 
00:06:07.336152 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.702182 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.702482 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.704218 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.704439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.704546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:07 crc kubenswrapper[4811]: I1203 00:06:07.967030 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:08 crc kubenswrapper[4811]: I1203 00:06:08.202151 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:08 crc kubenswrapper[4811]: I1203 00:06:08.203744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:08 crc kubenswrapper[4811]: I1203 00:06:08.203809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:08 crc kubenswrapper[4811]: I1203 00:06:08.203828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:08 crc kubenswrapper[4811]: I1203 00:06:08.268876 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 03 00:06:08 crc kubenswrapper[4811]: I1203 00:06:08.269082 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:08 crc kubenswrapper[4811]: I1203 00:06:08.270539 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:08 crc kubenswrapper[4811]: I1203 00:06:08.270571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:08 crc kubenswrapper[4811]: I1203 00:06:08.270582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:08 crc kubenswrapper[4811]: I1203 00:06:08.807212 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:09 crc kubenswrapper[4811]: I1203 00:06:09.205214 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:09 crc kubenswrapper[4811]: I1203 00:06:09.206608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:09 crc kubenswrapper[4811]: I1203 00:06:09.206655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:09 crc kubenswrapper[4811]: I1203 00:06:09.206668 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:10 crc kubenswrapper[4811]: E1203 00:06:10.190675 4811 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:06:10 crc kubenswrapper[4811]: I1203 00:06:10.207860 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:10 crc kubenswrapper[4811]: I1203 00:06:10.209079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:10 crc kubenswrapper[4811]: I1203 00:06:10.209249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:10 crc kubenswrapper[4811]: I1203 00:06:10.209368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:10 crc kubenswrapper[4811]: I1203 00:06:10.212982 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:11 crc kubenswrapper[4811]: I1203 00:06:11.210838 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:11 crc kubenswrapper[4811]: I1203 00:06:11.213634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:11 crc kubenswrapper[4811]: I1203 00:06:11.213824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:11 crc kubenswrapper[4811]: I1203 00:06:11.213992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:11 crc kubenswrapper[4811]: I1203 00:06:11.808029 4811 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 00:06:11 crc kubenswrapper[4811]: I1203 00:06:11.808545 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 00:06:12 crc kubenswrapper[4811]: I1203 00:06:12.058512 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 03 00:06:12 crc kubenswrapper[4811]: I1203 00:06:12.864399 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 03 00:06:12 crc kubenswrapper[4811]: I1203 00:06:12.864611 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:12 crc kubenswrapper[4811]: I1203 00:06:12.865989 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:12 crc kubenswrapper[4811]: I1203 00:06:12.866060 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:12 crc kubenswrapper[4811]: I1203 00:06:12.866070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:12 crc kubenswrapper[4811]: I1203 00:06:12.898993 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 00:06:12 crc kubenswrapper[4811]: I1203 00:06:12.899065 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 00:06:12 crc kubenswrapper[4811]: I1203 00:06:12.904636 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 00:06:12 crc kubenswrapper[4811]: I1203 00:06:12.904718 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 00:06:15 crc kubenswrapper[4811]: I1203 00:06:15.083354 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 00:06:15 crc kubenswrapper[4811]: I1203 00:06:15.083422 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 03 00:06:15 crc kubenswrapper[4811]: I1203 00:06:15.517441 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:15 crc kubenswrapper[4811]: I1203 00:06:15.517583 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:15 crc kubenswrapper[4811]: I1203 00:06:15.517919 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 00:06:15 crc kubenswrapper[4811]: I1203 00:06:15.517990 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 03 00:06:15 crc kubenswrapper[4811]: I1203 00:06:15.518555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:15 crc kubenswrapper[4811]: I1203 00:06:15.518603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:15 crc kubenswrapper[4811]: I1203 00:06:15.518618 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:15 crc kubenswrapper[4811]: I1203 00:06:15.521775 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:16 crc kubenswrapper[4811]: I1203 00:06:16.091762 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 00:06:16 crc kubenswrapper[4811]: I1203 00:06:16.091835 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 03 00:06:16 crc kubenswrapper[4811]: I1203 00:06:16.223470 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4811]: I1203 00:06:16.224137 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 00:06:16 crc kubenswrapper[4811]: I1203 00:06:16.224219 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 03 00:06:16 crc kubenswrapper[4811]: I1203 00:06:16.225054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4811]: I1203 00:06:16.225151 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4811]: I1203 00:06:16.225232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:17 crc kubenswrapper[4811]: E1203 00:06:17.888507 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 03 00:06:17 crc kubenswrapper[4811]: I1203 00:06:17.889803 4811 trace.go:236] Trace[40054794]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 00:06:04.275) (total time: 13613ms): Dec 
03 00:06:17 crc kubenswrapper[4811]: Trace[40054794]: ---"Objects listed" error: 13613ms (00:06:17.889) Dec 03 00:06:17 crc kubenswrapper[4811]: Trace[40054794]: [13.613758324s] [13.613758324s] END Dec 03 00:06:17 crc kubenswrapper[4811]: I1203 00:06:17.889840 4811 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 00:06:17 crc kubenswrapper[4811]: I1203 00:06:17.890061 4811 trace.go:236] Trace[346801850]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 00:06:04.155) (total time: 13734ms): Dec 03 00:06:17 crc kubenswrapper[4811]: Trace[346801850]: ---"Objects listed" error: 13734ms (00:06:17.890) Dec 03 00:06:17 crc kubenswrapper[4811]: Trace[346801850]: [13.734090948s] [13.734090948s] END Dec 03 00:06:17 crc kubenswrapper[4811]: I1203 00:06:17.890079 4811 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 00:06:17 crc kubenswrapper[4811]: I1203 00:06:17.890942 4811 trace.go:236] Trace[1865303324]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 00:06:03.879) (total time: 14011ms): Dec 03 00:06:17 crc kubenswrapper[4811]: Trace[1865303324]: ---"Objects listed" error: 14011ms (00:06:17.890) Dec 03 00:06:17 crc kubenswrapper[4811]: Trace[1865303324]: [14.011595228s] [14.011595228s] END Dec 03 00:06:17 crc kubenswrapper[4811]: I1203 00:06:17.890959 4811 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 00:06:17 crc kubenswrapper[4811]: E1203 00:06:17.891213 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 03 00:06:17 crc kubenswrapper[4811]: I1203 00:06:17.891730 4811 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 03 00:06:17 crc kubenswrapper[4811]: I1203 00:06:17.893509 4811 trace.go:236] Trace[559881876]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 00:06:03.998) (total time: 13895ms): Dec 03 00:06:17 crc kubenswrapper[4811]: Trace[559881876]: ---"Objects listed" error: 13895ms (00:06:17.893) Dec 03 00:06:17 crc kubenswrapper[4811]: Trace[559881876]: [13.895149859s] [13.895149859s] END Dec 03 00:06:17 crc kubenswrapper[4811]: I1203 00:06:17.893530 4811 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.059710 4811 apiserver.go:52] "Watching apiserver" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.063011 4811 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.063236 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.063614 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.063658 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.063757 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.064061 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.064072 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.064122 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.064159 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.064176 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.064147 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.067162 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.067200 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.067868 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.068078 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.068367 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.068527 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.068636 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.068778 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.070867 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.101781 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.109077 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fl6vq"] Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.109688 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fl6vq" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.112316 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.112316 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.112574 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.124060 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.136039 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.147335 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.158197 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.161974 4811 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.169694 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.181870 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193551 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193612 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193638 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193662 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193689 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193715 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193741 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 00:06:18 crc 
kubenswrapper[4811]: I1203 00:06:18.193764 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193788 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193808 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193829 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193894 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193918 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193949 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.193980 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194003 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194030 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 
00:06:18.194055 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194077 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194096 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194123 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194192 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194212 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194228 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194247 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194278 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194295 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194310 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194352 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194369 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194386 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194435 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194457 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194475 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194491 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194510 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194518 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194552 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194571 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194614 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194632 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194665 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194681 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194711 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194741 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194756 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 03 00:06:18 crc 
kubenswrapper[4811]: I1203 00:06:18.194772 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194804 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194821 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194839 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194876 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194906 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194921 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194938 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194957 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194974 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " 
Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195002 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195132 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195149 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195168 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195273 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195294 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195330 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195360 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195376 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195394 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195410 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195425 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195442 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195482 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195500 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195533 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195586 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195698 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195729 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195798 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195815 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195830 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195845 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195862 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195880 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195914 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195933 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195949 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195966 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195982 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196013 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196028 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196056 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196164 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196186 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196203 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196236 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196684 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196708 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 00:06:18 crc 
kubenswrapper[4811]: I1203 00:06:18.196747 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196765 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196803 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196843 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196862 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196884 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196908 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196927 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196952 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196975 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " 
Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197208 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197276 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197295 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197393 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197440 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197484 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197530 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197548 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197569 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197588 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197607 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197624 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197640 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197659 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197679 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197700 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197719 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197765 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197782 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197800 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197816 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197834 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197885 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197903 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197923 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197941 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197986 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198004 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198021 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198037 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198052 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198069 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198102 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198120 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198136 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198159 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198175 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198191 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198206 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198222 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198245 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198479 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198501 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198518 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198550 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198567 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198585 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198605 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198623 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198640 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198659 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198678 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198697 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198715 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198732 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198750 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198768 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198786 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198808 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198832 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198851 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198876 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198900 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198918 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198940 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198958 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198977 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198998 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199016 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199033 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199051 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199069 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199087 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199105 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199125 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199143 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199160 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199178 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199196 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199214 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199340 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199359 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199379 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199396 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199413 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199431 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199450 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199470 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199486 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199503 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199519 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199537 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199578 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199603 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199625 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199645 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199671 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f2h8\" (UniqueName: \"kubernetes.io/projected/6cce253a-e326-4d5e-9cf8-3dff3e77fcf7-kube-api-access-6f2h8\") pod \"node-resolver-fl6vq\" (UID: \"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\") " pod="openshift-dns/node-resolver-fl6vq" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199689 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199725 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6cce253a-e326-4d5e-9cf8-3dff3e77fcf7-hosts-file\") pod \"node-resolver-fl6vq\" (UID: \"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\") " pod="openshift-dns/node-resolver-fl6vq" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199744 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199782 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199811 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199851 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199874 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199895 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199911 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199935 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199957 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.200009 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.200021 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194801 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.194999 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195256 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195393 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.195668 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.200751 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196162 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196725 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196760 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196801 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196840 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.196836 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197011 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197531 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197581 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197661 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197824 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197800 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197869 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.197877 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198309 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198381 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198461 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198742 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198787 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198816 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.198960 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199036 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199035 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199090 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199123 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199389 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199401 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199533 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199844 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.199890 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.200098 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.200962 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.200338 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.200385 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.200643 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.201104 4811 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.201220 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.201315 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.201438 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.201568 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.201909 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.202119 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.202340 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.202691 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.202762 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.202786 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.202993 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.203228 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.203301 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.203485 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.203586 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.203731 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.203908 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.203969 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.204102 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.204327 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.204355 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.204615 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.204677 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.205191 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.205550 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.205600 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.206178 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.206550 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.206566 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.206873 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.206872 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.207174 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.207290 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.207340 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.207552 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.208941 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.209408 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.209606 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.209625 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.209946 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.210187 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.213218 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.213394 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.213549 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.213974 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.214562 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.214702 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.214996 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.215029 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.215324 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.215456 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.215553 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.215734 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.215794 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.222776 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.223725 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.223743 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.223977 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.224247 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.224499 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.224881 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.224945 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.225146 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.225303 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.225389 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.225490 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.225783 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.226507 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.226568 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.226590 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.226700 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.227089 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.227301 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.227370 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.228899 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.229122 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.229674 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.230079 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.230636 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.230956 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.231135 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.231651 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.231716 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.232336 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.232450 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.232786 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.232849 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.233085 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.233411 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.234109 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.234498 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.234598 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:18.734570823 +0000 UTC m=+18.876400295 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.234653 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.234689 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:18.734682236 +0000 UTC m=+18.876511698 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.235177 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.240751 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.242319 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.242655 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.242859 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.243061 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.243987 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.244212 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.245883 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.246382 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.246547 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.247191 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.247038 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.247308 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.247403 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.248794 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.250008 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.250296 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.250340 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.250453 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.250700 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.250783 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.250790 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.251107 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.251201 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.251242 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.251332 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.251537 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.251578 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.251611 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.251636 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.246409 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.251795 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:18.751750908 +0000 UTC m=+18.893580400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.252218 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.245737 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.252386 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.252723 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.253121 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.253148 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:06:18.752844725 +0000 UTC m=+18.894674197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.253254 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.253355 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.253654 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.254032 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.253685 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.253618 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.253794 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.253864 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.254307 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.254388 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.254484 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.255068 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.255480 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.255501 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.256454 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.256783 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.256877 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.257039 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.258488 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.259567 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.259785 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.259850 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.259986 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.260020 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.260239 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.260123 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.260464 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.260576 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.260704 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:18.760685058 +0000 UTC m=+18.902514530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.258712 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.279344 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.281467 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.286759 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.292504 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308063 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f2h8\" (UniqueName: \"kubernetes.io/projected/6cce253a-e326-4d5e-9cf8-3dff3e77fcf7-kube-api-access-6f2h8\") pod \"node-resolver-fl6vq\" (UID: \"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\") " pod="openshift-dns/node-resolver-fl6vq" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308114 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6cce253a-e326-4d5e-9cf8-3dff3e77fcf7-hosts-file\") pod \"node-resolver-fl6vq\" (UID: \"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\") " pod="openshift-dns/node-resolver-fl6vq" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308144 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308173 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308227 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308238 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308246 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308254 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308282 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308290 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308298 4811 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308305 4811 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308314 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308322 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308331 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308339 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308347 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308355 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308363 4811 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308371 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308383 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308408 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308418 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308426 4811 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308435 4811 
reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308443 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308450 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308459 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308467 4811 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308475 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308511 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308520 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308528 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308536 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308544 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308553 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308561 4811 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308569 4811 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308579 4811 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308587 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308596 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308604 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308613 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308621 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308629 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308638 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308647 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308656 4811 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308664 4811 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308674 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308682 4811 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308691 4811 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308699 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308707 4811 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308716 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308724 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308731 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308740 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308750 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308757 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308765 4811 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308773 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308781 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308797 4811 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308805 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308814 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308822 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308829 4811 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308838 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308846 4811 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308854 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308862 4811 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308872 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308892 4811 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308904 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308915 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308925 4811 
reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308934 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308943 4811 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308951 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308960 4811 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308968 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308976 4811 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308986 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.308995 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309004 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309011 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309020 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309029 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 
00:06:18.309040 4811 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309048 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309057 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309065 4811 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309073 4811 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309083 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309095 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309104 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309113 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309122 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309130 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309139 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309148 4811 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc 
kubenswrapper[4811]: I1203 00:06:18.309160 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309171 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309182 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309194 4811 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309204 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309213 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309223 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309233 4811 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309240 4811 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309249 4811 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309272 4811 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309281 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309290 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc 
kubenswrapper[4811]: I1203 00:06:18.309308 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309316 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309324 4811 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309332 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309339 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309347 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309423 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6cce253a-e326-4d5e-9cf8-3dff3e77fcf7-hosts-file\") pod \"node-resolver-fl6vq\" (UID: \"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\") " pod="openshift-dns/node-resolver-fl6vq" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309500 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309571 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309829 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309853 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309865 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309873 4811 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309881 4811 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309880 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309890 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309902 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309911 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309922 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309931 4811 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309940 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309948 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309956 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309964 4811 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309972 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309982 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309991 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.309999 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310022 4811 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310030 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310038 4811 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310047 4811 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310054 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310062 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310071 4811 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310079 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310087 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310096 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310104 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310112 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310121 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310129 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310138 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310148 4811 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310158 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310168 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310178 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310188 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310196 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310204 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310218 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310227 4811 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310235 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310243 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310251 4811 reconciler_common.go:293] "Volume detached for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310275 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310284 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310291 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310299 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310307 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310315 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310323 4811 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310331 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310339 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310349 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310356 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310364 4811 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310372 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" 
(UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310381 4811 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310389 4811 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310397 4811 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310405 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310412 4811 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310420 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310428 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310438 4811 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310446 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310454 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310476 4811 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.310484 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.323170 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.323752 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.331530 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.343921 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.356858 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.361506 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f2h8\" (UniqueName: \"kubernetes.io/projected/6cce253a-e326-4d5e-9cf8-3dff3e77fcf7-kube-api-access-6f2h8\") pod \"node-resolver-fl6vq\" (UID: \"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\") " pod="openshift-dns/node-resolver-fl6vq" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.365822 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.368531 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.375854 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.380417 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.388024 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.392055 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.392184 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.400373 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.401786 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.413856 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.413877 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.413886 4811 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.413896 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.413904 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.423940 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fl6vq" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.509442 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-c998b"] Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.510493 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-bc7p2"] Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.510802 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.511179 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.515836 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.516002 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.516019 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.516291 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.516582 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.516701 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.517527 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-56rjt"] Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.517803 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.517955 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.518636 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.518840 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.519157 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.521529 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.521778 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.533894 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.544885 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.559142 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.573679 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.586439 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.603173 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.615912 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-system-cni-dir\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.615965 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-run-netns\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.615980 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-hostroot\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616013 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00463350-e27b-4e14-acee-d79ff4d8eda3-rootfs\") pod \"machine-config-daemon-bc7p2\" (UID: \"00463350-e27b-4e14-acee-d79ff4d8eda3\") " pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616034 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps4ng\" (UniqueName: \"kubernetes.io/projected/00463350-e27b-4e14-acee-d79ff4d8eda3-kube-api-access-ps4ng\") pod \"machine-config-daemon-bc7p2\" (UID: \"00463350-e27b-4e14-acee-d79ff4d8eda3\") " pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616054 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/06cb0758-b33b-4730-a341-cc78a072aa5f-multus-daemon-config\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616101 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-os-release\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616119 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dbb952e-adc7-460c-994c-5620183fe85f-system-cni-dir\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616133 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0dbb952e-adc7-460c-994c-5620183fe85f-cnibin\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616149 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00463350-e27b-4e14-acee-d79ff4d8eda3-mcd-auth-proxy-config\") pod \"machine-config-daemon-bc7p2\" (UID: \"00463350-e27b-4e14-acee-d79ff4d8eda3\") " pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616179 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/06cb0758-b33b-4730-a341-cc78a072aa5f-cni-binary-copy\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616197 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-multus-socket-dir-parent\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616215 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-multus-conf-dir\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616271 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-var-lib-kubelet\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616298 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-run-multus-certs\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616385 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0dbb952e-adc7-460c-994c-5620183fe85f-os-release\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616444 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0dbb952e-adc7-460c-994c-5620183fe85f-cni-binary-copy\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616470 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0dbb952e-adc7-460c-994c-5620183fe85f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616549 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjmz2\" (UniqueName: \"kubernetes.io/projected/0dbb952e-adc7-460c-994c-5620183fe85f-kube-api-access-kjmz2\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616602 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00463350-e27b-4e14-acee-d79ff4d8eda3-proxy-tls\") pod \"machine-config-daemon-bc7p2\" (UID: \"00463350-e27b-4e14-acee-d79ff4d8eda3\") " pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616715 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-run-k8s-cni-cncf-io\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616780 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-var-lib-cni-bin\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616799 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-var-lib-cni-multus\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616817 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-etc-kubernetes\") pod \"multus-c998b\" 
(UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616834 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5dzt\" (UniqueName: \"kubernetes.io/projected/06cb0758-b33b-4730-a341-cc78a072aa5f-kube-api-access-l5dzt\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616853 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0dbb952e-adc7-460c-994c-5620183fe85f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616878 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-cnibin\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.616905 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-multus-cni-dir\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.619876 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.632635 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.642298 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.657948 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.669223 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.680655 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.711868 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717378 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-multus-socket-dir-parent\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717438 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-multus-conf-dir\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717456 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-var-lib-kubelet\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717501 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-run-multus-certs\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717522 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0dbb952e-adc7-460c-994c-5620183fe85f-os-release\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717544 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0dbb952e-adc7-460c-994c-5620183fe85f-cni-binary-copy\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717577 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0dbb952e-adc7-460c-994c-5620183fe85f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717599 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjmz2\" (UniqueName: \"kubernetes.io/projected/0dbb952e-adc7-460c-994c-5620183fe85f-kube-api-access-kjmz2\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717615 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-var-lib-cni-bin\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " 
pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717615 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-var-lib-kubelet\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717647 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-var-lib-cni-multus\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717663 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00463350-e27b-4e14-acee-d79ff4d8eda3-proxy-tls\") pod \"machine-config-daemon-bc7p2\" (UID: \"00463350-e27b-4e14-acee-d79ff4d8eda3\") " pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717681 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-run-k8s-cni-cncf-io\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717697 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-etc-kubernetes\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717728 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5dzt\" (UniqueName: \"kubernetes.io/projected/06cb0758-b33b-4730-a341-cc78a072aa5f-kube-api-access-l5dzt\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717743 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0dbb952e-adc7-460c-994c-5620183fe85f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717756 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-cnibin\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717773 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-multus-cni-dir\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717806 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-multus-socket-dir-parent\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717811 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-system-cni-dir\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717827 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-run-netns\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717838 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-multus-conf-dir\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717841 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-hostroot\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717876 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00463350-e27b-4e14-acee-d79ff4d8eda3-rootfs\") pod \"machine-config-daemon-bc7p2\" (UID: \"00463350-e27b-4e14-acee-d79ff4d8eda3\") " pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717897 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-run-multus-certs\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717901 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps4ng\" (UniqueName: \"kubernetes.io/projected/00463350-e27b-4e14-acee-d79ff4d8eda3-kube-api-access-ps4ng\") pod \"machine-config-daemon-bc7p2\" (UID: \"00463350-e27b-4e14-acee-d79ff4d8eda3\") " pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717923 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/06cb0758-b33b-4730-a341-cc78a072aa5f-multus-daemon-config\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717947 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0dbb952e-adc7-460c-994c-5620183fe85f-cnibin\") pod \"multus-additional-cni-plugins-56rjt\" 
(UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717983 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-os-release\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.718006 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dbb952e-adc7-460c-994c-5620183fe85f-system-cni-dir\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.718029 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00463350-e27b-4e14-acee-d79ff4d8eda3-mcd-auth-proxy-config\") pod \"machine-config-daemon-bc7p2\" (UID: \"00463350-e27b-4e14-acee-d79ff4d8eda3\") " pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.718049 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/06cb0758-b33b-4730-a341-cc78a072aa5f-cni-binary-copy\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.718113 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0dbb952e-adc7-460c-994c-5620183fe85f-os-release\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.718761 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-var-lib-cni-bin\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.718816 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-run-k8s-cni-cncf-io\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.718838 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-var-lib-cni-multus\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.718962 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dbb952e-adc7-460c-994c-5620183fe85f-system-cni-dir\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " 
pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.718968 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0dbb952e-adc7-460c-994c-5620183fe85f-cnibin\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.719048 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-os-release\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.719099 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-system-cni-dir\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.719114 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/06cb0758-b33b-4730-a341-cc78a072aa5f-multus-daemon-config\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.719133 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00463350-e27b-4e14-acee-d79ff4d8eda3-rootfs\") pod \"machine-config-daemon-bc7p2\" (UID: \"00463350-e27b-4e14-acee-d79ff4d8eda3\") " pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.717881 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-hostroot\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.719196 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0dbb952e-adc7-460c-994c-5620183fe85f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.719207 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0dbb952e-adc7-460c-994c-5620183fe85f-cni-binary-copy\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.719254 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-host-run-netns\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.719339 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-multus-cni-dir\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.719349 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0dbb952e-adc7-460c-994c-5620183fe85f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.719350 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-cnibin\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.719563 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/06cb0758-b33b-4730-a341-cc78a072aa5f-cni-binary-copy\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.719830 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00463350-e27b-4e14-acee-d79ff4d8eda3-mcd-auth-proxy-config\") pod \"machine-config-daemon-bc7p2\" (UID: \"00463350-e27b-4e14-acee-d79ff4d8eda3\") " pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.718689 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06cb0758-b33b-4730-a341-cc78a072aa5f-etc-kubernetes\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.722526 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00463350-e27b-4e14-acee-d79ff4d8eda3-proxy-tls\") pod \"machine-config-daemon-bc7p2\" (UID: \"00463350-e27b-4e14-acee-d79ff4d8eda3\") " pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.726418 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.740529 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.741553 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5dzt\" (UniqueName: \"kubernetes.io/projected/06cb0758-b33b-4730-a341-cc78a072aa5f-kube-api-access-l5dzt\") pod \"multus-c998b\" (UID: \"06cb0758-b33b-4730-a341-cc78a072aa5f\") " pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.744842 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjmz2\" (UniqueName: \"kubernetes.io/projected/0dbb952e-adc7-460c-994c-5620183fe85f-kube-api-access-kjmz2\") pod \"multus-additional-cni-plugins-56rjt\" (UID: \"0dbb952e-adc7-460c-994c-5620183fe85f\") " pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.744897 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps4ng\" (UniqueName: \"kubernetes.io/projected/00463350-e27b-4e14-acee-d79ff4d8eda3-kube-api-access-ps4ng\") pod \"machine-config-daemon-bc7p2\" (UID: \"00463350-e27b-4e14-acee-d79ff4d8eda3\") " pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.760397 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.772254 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.787373 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.800361 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.814317 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.819229 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.819407 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.819462 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.819504 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.819547 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.819572 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.819618 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.819634 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.819663 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.819696 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:19.819677024 +0000 UTC m=+19.961506496 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.819702 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.819733 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:19.819712865 +0000 UTC m=+19.961542487 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.819790 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:19.819769217 +0000 UTC m=+19.961598689 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.819709 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.819890 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.819919 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.819992 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:19.819971802 +0000 UTC m=+19.961801274 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:18 crc kubenswrapper[4811]: E1203 00:06:18.820189 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:06:19.820141617 +0000 UTC m=+19.961971079 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.821709 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.824674 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.826096 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.838568 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.838648 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.846795 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-c998b" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.852331 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: W1203 00:06:18.854028 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00463350_e27b_4e14_acee_d79ff4d8eda3.slice/crio-f834706c4f2f8ac876cbe572e0fc269d608aebe8720c271c42e2a9fceb646da0 WatchSource:0}: Error finding container f834706c4f2f8ac876cbe572e0fc269d608aebe8720c271c42e2a9fceb646da0: Status 404 returned error can't find the container with id f834706c4f2f8ac876cbe572e0fc269d608aebe8720c271c42e2a9fceb646da0 Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.854687 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-56rjt" Dec 03 00:06:18 crc kubenswrapper[4811]: W1203 00:06:18.861822 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06cb0758_b33b_4730_a341_cc78a072aa5f.slice/crio-79767c271f14448bbab3a6af0f66e502f6b88b00e87f14fbe0304cd8925899a5 WatchSource:0}: Error finding container 79767c271f14448bbab3a6af0f66e502f6b88b00e87f14fbe0304cd8925899a5: Status 404 returned error can't find the container with id 79767c271f14448bbab3a6af0f66e502f6b88b00e87f14fbe0304cd8925899a5 Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.869165 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:18 crc kubenswrapper[4811]: W1203 00:06:18.873052 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dbb952e_adc7_460c_994c_5620183fe85f.slice/crio-7b3c25d07a4a5253c4b0464d3d68710b3bb57256d66551338fcd341ee405d99c WatchSource:0}: Error finding container 7b3c25d07a4a5253c4b0464d3d68710b3bb57256d66551338fcd341ee405d99c: Status 404 returned error can't find the container with id 7b3c25d07a4a5253c4b0464d3d68710b3bb57256d66551338fcd341ee405d99c Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.874033 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mjj8p"] Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.874858 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.878679 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.878928 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.878952 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.879364 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.879400 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.879652 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.880049 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.905652 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:18Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.951690 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:18Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.969379 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:18Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:18 crc kubenswrapper[4811]: I1203 00:06:18.985775 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:18Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.001288 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:18Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.018024 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021206 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-ovn\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021288 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-node-log\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021315 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovnkube-script-lib\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021350 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-systemd\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021369 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-openvswitch\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021384 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-log-socket\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021405 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-run-ovn-kubernetes\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021439 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovnkube-config\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021474 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovn-node-metrics-cert\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021493 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-kubelet\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021554 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-etc-openvswitch\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021576 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-systemd-units\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021687 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021766 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-env-overrides\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021796 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-run-netns\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021817 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-cni-bin\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021846 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-cni-netd\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.021910 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms7q7\" (UniqueName: \"kubernetes.io/projected/3e8d9251-ed38-4134-b62e-f9a34bf4c755-kube-api-access-ms7q7\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.022294 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-var-lib-openvswitch\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.022413 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-slash\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.038954 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.053971 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.070809 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.087578 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.101662 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.121598 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.122895 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-var-lib-openvswitch\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.122943 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-slash\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: 
I1203 00:06:19.122963 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-ovn\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.122981 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-node-log\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.122997 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovnkube-script-lib\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123012 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-systemd\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123025 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-openvswitch\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123040 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-log-socket\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123054 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-run-ovn-kubernetes\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123078 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovnkube-config\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123074 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-ovn\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123103 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovn-node-metrics-cert\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123120 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-kubelet\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123074 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-var-lib-openvswitch\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123136 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-etc-openvswitch\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123151 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-node-log\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123181 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-openvswitch\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123191 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-systemd-units\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123208 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-log-socket\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123154 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-systemd-units\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123246 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-var-lib-cni-networks-ovn-kubernetes\") 
pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123304 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-env-overrides\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123322 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-run-netns\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123340 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-cni-bin\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123361 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-cni-netd\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123408 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms7q7\" (UniqueName: \"kubernetes.io/projected/3e8d9251-ed38-4134-b62e-f9a34bf4c755-kube-api-access-ms7q7\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123761 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123962 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovnkube-config\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.124069 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovnkube-script-lib\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.124118 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-etc-openvswitch\") pod \"ovnkube-node-mjj8p\" (UID: 
\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.124148 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-kubelet\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.124176 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-cni-bin\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.124199 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-run-netns\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123125 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-systemd\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.124237 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-cni-netd\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.123163 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-run-ovn-kubernetes\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.124294 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-env-overrides\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.124353 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-slash\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.129705 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovn-node-metrics-cert\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.142314 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms7q7\" (UniqueName: \"kubernetes.io/projected/3e8d9251-ed38-4134-b62e-f9a34bf4c755-kube-api-access-ms7q7\") pod \"ovnkube-node-mjj8p\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.142747 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.187850 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.189664 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745326
5a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",
\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"h
ostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: W1203 00:06:19.205915 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e8d9251_ed38_4134_b62e_f9a34bf4c755.slice/crio-f63f7db3d655c69aa2517a8067d1e6c173166e1d6f03a9463ffc89e084553c00 WatchSource:0}: Error finding container f63f7db3d655c69aa2517a8067d1e6c173166e1d6f03a9463ffc89e084553c00: Status 404 returned error can't find the container with id f63f7db3d655c69aa2517a8067d1e6c173166e1d6f03a9463ffc89e084553c00 Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.242323 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.256929 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" event={"ID":"0dbb952e-adc7-460c-994c-5620183fe85f","Type":"ContainerStarted","Data":"74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.256972 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" event={"ID":"0dbb952e-adc7-460c-994c-5620183fe85f","Type":"ContainerStarted","Data":"7b3c25d07a4a5253c4b0464d3d68710b3bb57256d66551338fcd341ee405d99c"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.258054 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1edf8b16ea740d80aba76995ee42372d959286d23e39523f1a19d8750f3cf30c"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.260576 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.260920 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.260986 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.260998 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0a0848babf942b7907e192de599ec1ea5cfecf13be4d5970564f2efe36b7e5ac"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.265317 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerStarted","Data":"b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.265909 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerStarted","Data":"84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.265929 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerStarted","Data":"f834706c4f2f8ac876cbe572e0fc269d608aebe8720c271c42e2a9fceb646da0"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.268094 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.268139 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b724adc02bfda6d65afa8a763686dbda4cb405c34a94cafa937190d70e2b3623"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.272127 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"f63f7db3d655c69aa2517a8067d1e6c173166e1d6f03a9463ffc89e084553c00"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.274909 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.278084 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240" exitCode=255 Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.278196 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.282520 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c998b" event={"ID":"06cb0758-b33b-4730-a341-cc78a072aa5f","Type":"ContainerStarted","Data":"d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.282574 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c998b" event={"ID":"06cb0758-b33b-4730-a341-cc78a072aa5f","Type":"ContainerStarted","Data":"79767c271f14448bbab3a6af0f66e502f6b88b00e87f14fbe0304cd8925899a5"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.284776 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fl6vq" event={"ID":"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7","Type":"ContainerStarted","Data":"3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.284825 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fl6vq" event={"ID":"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7","Type":"ContainerStarted","Data":"357a30832639ad17f11ddc4968fdd7a316fc124df0e6fc69f1d2acd90b6599c4"} Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.290109 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.296685 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.296882 4811 scope.go:117] "RemoveContainer" containerID="75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.310561 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.335700 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.352410 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.366744 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.379419 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.400726 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03
T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.432123 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.473942 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.509678 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.568735 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.594010 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.638020 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.673991 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.709951 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.833305 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.833420 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.833452 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.833484 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:19 crc kubenswrapper[4811]: I1203 00:06:19.833514 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833544 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:06:21.833511233 +0000 UTC m=+21.975340715 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833616 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833642 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833647 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833668 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833677 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833682 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833691 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833648 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833691 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:21.833675887 +0000 UTC m=+21.975505359 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833758 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:21.833745149 +0000 UTC m=+21.975574751 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833781 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:21.833773169 +0000 UTC m=+21.975602771 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:19 crc kubenswrapper[4811]: E1203 00:06:19.833800 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:21.83379162 +0000 UTC m=+21.975621232 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.039750 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pd6c8"] Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.040511 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pd6c8" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.044939 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.045078 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.045405 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.045498 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.056413 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03
T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.072054 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.085792 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.096703 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.113623 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.113935 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.114025 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.114090 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:20 crc kubenswrapper[4811]: E1203 00:06:20.114146 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:20 crc kubenswrapper[4811]: E1203 00:06:20.114033 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:20 crc kubenswrapper[4811]: E1203 00:06:20.114220 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.119564 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.120253 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.121071 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.121768 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.123602 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.124177 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.124862 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.125868 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.126506 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.127425 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.127983 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.130453 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.130735 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.131383 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 03 00:06:20 crc 
kubenswrapper[4811]: I1203 00:06:20.132429 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.133003 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.134573 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.135246 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.135689 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.136317 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35-host\") pod \"node-ca-pd6c8\" (UID: \"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\") " pod="openshift-image-registry/node-ca-pd6c8" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.136398 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtgjn\" (UniqueName: \"kubernetes.io/projected/9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35-kube-api-access-vtgjn\") pod \"node-ca-pd6c8\" (UID: \"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\") " pod="openshift-image-registry/node-ca-pd6c8" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.136525 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35-serviceca\") pod \"node-ca-pd6c8\" (UID: \"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\") " pod="openshift-image-registry/node-ca-pd6c8" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.137421 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.138061 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.138549 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.139574 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.139987 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.141086 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.141531 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.143442 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.144143 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.144756 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.147778 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.149960 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.150642 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.151664 4811 kubelet_volumes.go:152] "Cleaned up 
orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.151786 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.153485 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.154504 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.155080 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.159863 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.160915 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.161486 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.162458 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.162908 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.163132 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.163692 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.164825 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.165815 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.166403 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.167181 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.168110 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.169530 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.170295 4811 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.171147 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.171663 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.172104 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.173012 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.173610 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.174658 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.179002 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.195295 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.237188 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35-host\") pod \"node-ca-pd6c8\" (UID: \"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\") " pod="openshift-image-registry/node-ca-pd6c8" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.237239 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtgjn\" (UniqueName: \"kubernetes.io/projected/9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35-kube-api-access-vtgjn\") pod \"node-ca-pd6c8\" (UID: \"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\") " pod="openshift-image-registry/node-ca-pd6c8" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.237279 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35-serviceca\") pod \"node-ca-pd6c8\" (UID: \"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\") " pod="openshift-image-registry/node-ca-pd6c8" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.237362 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35-host\") pod \"node-ca-pd6c8\" (UID: \"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\") " pod="openshift-image-registry/node-ca-pd6c8" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.238896 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35-serviceca\") pod \"node-ca-pd6c8\" (UID: \"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\") " pod="openshift-image-registry/node-ca-pd6c8" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.243075 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.260622 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgjn\" (UniqueName: \"kubernetes.io/projected/9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35-kube-api-access-vtgjn\") pod \"node-ca-pd6c8\" (UID: \"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\") " pod="openshift-image-registry/node-ca-pd6c8" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.289638 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.291689 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05"} Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.292168 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.293099 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec" exitCode=0 Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.293190 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec"} Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.293833 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.294818 4811 generic.go:334] "Generic (PLEG): container finished" podID="0dbb952e-adc7-460c-994c-5620183fe85f" 
containerID="74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099" exitCode=0 Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.294889 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" event={"ID":"0dbb952e-adc7-460c-994c-5620183fe85f","Type":"ContainerDied","Data":"74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099"} Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.336346 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.357767 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pd6c8" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.379544 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.411721 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.447802 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.491088 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.531795 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.581562 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.607548 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.657328 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.693933 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.736996 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z 
is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.771297 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.811359 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.855430 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.891031 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.929948 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:20 crc kubenswrapper[4811]: I1203 00:06:20.972087 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.011390 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.051433 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.089456 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.091291 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.093149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.093184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.093197 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.093376 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.149688 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.162295 4811 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.162623 4811 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.164113 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.164156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.164167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.164184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.164201 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.182571 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 
2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.186543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.186584 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.186596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.186615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.186628 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.198253 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 
2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.201662 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.201690 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.201699 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.201713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.201722 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.211412 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc 
kubenswrapper[4811]: E1203 00:06:21.213720 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 
03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.217144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.217183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.217194 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.217213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.217229 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.229649 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 
2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.233783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.233831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.233846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.233864 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.233879 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.251839 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 
2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.252041 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.254548 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.254590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.254601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.254619 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.254631 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.260851 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z 
is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.287201 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.304688 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.304756 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.304767 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" 
event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.304778 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.304788 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.304801 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.306954 4811 generic.go:334] "Generic (PLEG): container finished" podID="0dbb952e-adc7-460c-994c-5620183fe85f" containerID="dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90" exitCode=0 Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.307030 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" event={"ID":"0dbb952e-adc7-460c-994c-5620183fe85f","Type":"ContainerDied","Data":"dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.309285 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pd6c8" event={"ID":"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35","Type":"ContainerStarted","Data":"6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.309313 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pd6c8" event={"ID":"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35","Type":"ContainerStarted","Data":"c084f9efcf523360491d2a02ba957b6064e904a6df6aacdf2f034573213bc651"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.330360 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.357952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.358168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.358293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.358385 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.358463 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.370613 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.410240 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.449525 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.461055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.461373 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.461414 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.461432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.461445 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.490995 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.530234 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.563754 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.563792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.563801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.563817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.563827 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.569032 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.642085 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":
\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168
.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.666215 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.666248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.666271 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.666474 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.666500 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.673893 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.693459 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.735570 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.769571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.769608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.769616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.769631 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.769641 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.770417 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.809012 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.849890 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.859525 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.859622 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.859656 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.859690 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:06:25.859667691 +0000 UTC m=+26.001497163 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.859718 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.859764 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.859729 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.859845 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:25.859835576 +0000 UTC m=+26.001665048 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.859852 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.859856 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.859887 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:25.859876167 +0000 UTC m=+26.001705639 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.859805 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.859907 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.859915 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.859926 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.859948 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.859984 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:25.85997597 +0000 UTC m=+26.001805442 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:21 crc kubenswrapper[4811]: E1203 00:06:21.860003 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:25.85999316 +0000 UTC m=+26.001822632 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.871713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.871742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.871751 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.871764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.871773 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.891967 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.930939 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.974678 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.974741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.974751 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.974776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.974790 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:21Z","lastTransitionTime":"2025-12-03T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:21 crc kubenswrapper[4811]: I1203 00:06:21.977228 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:21Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.009494 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.049172 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.077922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.077975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.077986 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.078011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.078023 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:22Z","lastTransitionTime":"2025-12-03T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.089387 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.114929 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.115029 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:22 crc kubenswrapper[4811]: E1203 00:06:22.115104 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.114963 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:22 crc kubenswrapper[4811]: E1203 00:06:22.115231 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:22 crc kubenswrapper[4811]: E1203 00:06:22.115429 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.180668 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.180766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.180781 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.180804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.180819 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:22Z","lastTransitionTime":"2025-12-03T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.289761 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.289816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.289829 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.289860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.289876 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:22Z","lastTransitionTime":"2025-12-03T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.321556 4811 generic.go:334] "Generic (PLEG): container finished" podID="0dbb952e-adc7-460c-994c-5620183fe85f" containerID="53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c" exitCode=0 Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.321677 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" event={"ID":"0dbb952e-adc7-460c-994c-5620183fe85f","Type":"ContainerDied","Data":"53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c"} Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.329550 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6"} Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.338412 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.353974 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.371599 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.387837 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.393207 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.393450 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.393465 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.393479 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.393516 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:22Z","lastTransitionTime":"2025-12-03T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.413002 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2
900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.429807 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.446137 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/opens
hift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.459022 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.472042 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.487975 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.495947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.495982 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.495992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.496007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.496016 4811 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:22Z","lastTransitionTime":"2025-12-03T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.529186 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.569719 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.598802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.598838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.598848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.598868 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.598880 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:22Z","lastTransitionTime":"2025-12-03T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.613055 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.648654 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.690296 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.701489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.701525 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.701539 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.701557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.701570 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:22Z","lastTransitionTime":"2025-12-03T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.734585 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.770872 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.803991 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.804066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.804084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.804115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.804130 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:22Z","lastTransitionTime":"2025-12-03T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.817240 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.854476 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.893728 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.900719 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.906897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.906951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.906962 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.906982 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.906994 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:22Z","lastTransitionTime":"2025-12-03T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.923985 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.936257 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.957132 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:22 crc kubenswrapper[4811]: I1203 00:06:22.992808 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.009008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.009052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.009062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.009080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.009090 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:23Z","lastTransitionTime":"2025-12-03T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.035322 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2
900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.072834 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.112388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.112433 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.112441 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.112456 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.112465 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:23Z","lastTransitionTime":"2025-12-03T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.113704 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.150596 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.191546 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.214659 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.214712 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.214725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.214744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.214757 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:23Z","lastTransitionTime":"2025-12-03T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.229078 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.270128 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.314678 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.316497 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.316523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.316533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.316547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.316556 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:23Z","lastTransitionTime":"2025-12-03T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.337383 4811 generic.go:334] "Generic (PLEG): container finished" podID="0dbb952e-adc7-460c-994c-5620183fe85f" containerID="b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412" exitCode=0 Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.337465 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" event={"ID":"0dbb952e-adc7-460c-994c-5620183fe85f","Type":"ContainerDied","Data":"b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412"} Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.344577 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9"} Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.350952 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.396210 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.421287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.421329 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.421342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.421364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.421582 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:23Z","lastTransitionTime":"2025-12-03T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.438046 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.470786 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.518334 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.524930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.524979 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.524993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.525018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.525032 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:23Z","lastTransitionTime":"2025-12-03T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.554047 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.596717 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z 
is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.629044 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.629130 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.629147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.629171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.629189 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:23Z","lastTransitionTime":"2025-12-03T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.631221 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.673203 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.711607 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.732735 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.732843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.732889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.732907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.732931 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:23Z","lastTransitionTime":"2025-12-03T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.751347 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.791690 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.834723 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.836906 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.836949 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.836962 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.836980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.836992 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:23Z","lastTransitionTime":"2025-12-03T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.871086 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.911132 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.939764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.939810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.939819 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.939831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.939840 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:23Z","lastTransitionTime":"2025-12-03T00:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.950239 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:23 crc kubenswrapper[4811]: I1203 00:06:23.991676 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.038977 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b4371
5f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.042966 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.043092 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.043128 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.043160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.043184 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:24Z","lastTransitionTime":"2025-12-03T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.068922 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.115182 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.115368 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:24 crc kubenswrapper[4811]: E1203 00:06:24.115562 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.115614 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:24 crc kubenswrapper[4811]: E1203 00:06:24.115734 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:24 crc kubenswrapper[4811]: E1203 00:06:24.115815 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.117658 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.145557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.145598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.145609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.145627 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.145640 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:24Z","lastTransitionTime":"2025-12-03T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.148531 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.189837 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.235518 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.248776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.248872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.248892 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.248917 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.248949 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:24Z","lastTransitionTime":"2025-12-03T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.286633 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z 
is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.313781 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.351605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.352080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.352112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.352143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.352168 4811 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:24Z","lastTransitionTime":"2025-12-03T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.354494 4811 generic.go:334] "Generic (PLEG): container finished" podID="0dbb952e-adc7-460c-994c-5620183fe85f" containerID="2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968" exitCode=0 Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.354569 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" event={"ID":"0dbb952e-adc7-460c-994c-5620183fe85f","Type":"ContainerDied","Data":"2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968"} Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.357861 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.396970 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.441422 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.456084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.456146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.456158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.456176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.456189 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:24Z","lastTransitionTime":"2025-12-03T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.471590 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.514872 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.553154 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.559883 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.559924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.559935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.559952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.559968 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:24Z","lastTransitionTime":"2025-12-03T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.588673 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.627691 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.662811 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.662891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.662920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.662952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.662975 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:24Z","lastTransitionTime":"2025-12-03T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.671017 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.711610 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.767139 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.767200 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.767217 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.767240 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.767272 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:24Z","lastTransitionTime":"2025-12-03T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.768899 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2
900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.787917 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.831469 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/opens
hift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.870599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.870655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.870664 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.870688 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.870704 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:24Z","lastTransitionTime":"2025-12-03T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.871554 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.915093 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.956141 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.974288 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.974335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.974351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.974369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.974381 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:24Z","lastTransitionTime":"2025-12-03T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:24 crc kubenswrapper[4811]: I1203 00:06:24.991646 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.032678 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.077534 4811 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.077596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.077610 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.077634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.077647 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:25Z","lastTransitionTime":"2025-12-03T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.187479 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.187516 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.187527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.187542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.187553 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:25Z","lastTransitionTime":"2025-12-03T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.290900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.290956 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.290968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.290985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.290996 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:25Z","lastTransitionTime":"2025-12-03T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.362459 4811 generic.go:334] "Generic (PLEG): container finished" podID="0dbb952e-adc7-460c-994c-5620183fe85f" containerID="be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e" exitCode=0 Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.362530 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" event={"ID":"0dbb952e-adc7-460c-994c-5620183fe85f","Type":"ContainerDied","Data":"be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e"} Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.369420 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a"} Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.370950 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.371004 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.379624 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.394805 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.395210 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 
00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.395472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.395493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.395542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.395557 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:25Z","lastTransitionTime":"2025-12-03T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.408821 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.409549 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.412027 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o
://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.425768 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.439244 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.461845 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b4371
5f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.476574 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.489117 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.498639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.498673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.498681 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.498695 4811 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.498705 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:25Z","lastTransitionTime":"2025-12-03T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.504005 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.519284 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.540998 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.551747 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.566982 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.589191 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.602177 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.602253 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.602619 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.602647 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.602938 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:25Z","lastTransitionTime":"2025-12-03T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.636993 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.681063 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.705859 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.705924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.705937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.705960 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.705975 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:25Z","lastTransitionTime":"2025-12-03T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.708169 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.754676 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.795146 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.809200 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.809310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.809337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.809371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.809395 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:25Z","lastTransitionTime":"2025-12-03T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.832019 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.874664 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.909696 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.909845 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.909916 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.909964 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910057 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:06:33.910014958 +0000 UTC m=+34.051844470 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910081 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910165 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:33.910140892 +0000 UTC m=+34.051970404 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910181 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910225 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.910229 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910254 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910183 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not 
registered Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910374 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:33.910346527 +0000 UTC m=+34.052176039 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910444 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:33.910412318 +0000 UTC m=+34.052241790 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910446 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910474 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910486 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:25 crc kubenswrapper[4811]: E1203 00:06:25.910549 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:33.910531862 +0000 UTC m=+34.052361334 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.912758 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.912804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.912815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.912832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.912843 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:25Z","lastTransitionTime":"2025-12-03T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.917109 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.952707 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:25 crc kubenswrapper[4811]: I1203 00:06:25.994038 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.015899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.015969 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.015987 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.016012 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.016040 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:26Z","lastTransitionTime":"2025-12-03T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.032962 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.077415 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.111791 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.114059 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.114086 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.114086 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:26 crc kubenswrapper[4811]: E1203 00:06:26.114233 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:26 crc kubenswrapper[4811]: E1203 00:06:26.114339 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:26 crc kubenswrapper[4811]: E1203 00:06:26.114546 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.118027 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.118062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.118072 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.118086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.118097 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:26Z","lastTransitionTime":"2025-12-03T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.149123 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.195797 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.220180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.220222 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.220233 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.220249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.220286 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:26Z","lastTransitionTime":"2025-12-03T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.227747 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.323231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.323316 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.323337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.323365 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.323404 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:26Z","lastTransitionTime":"2025-12-03T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.378941 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.379085 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" event={"ID":"0dbb952e-adc7-460c-994c-5620183fe85f","Type":"ContainerStarted","Data":"7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5"} Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.412763 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.425870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.425927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.425944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.425971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.425989 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:26Z","lastTransitionTime":"2025-12-03T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.432428 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.452833 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.475140 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.498970 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.528628 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.528690 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.528709 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.528737 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.528755 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:26Z","lastTransitionTime":"2025-12-03T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.530115 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73527785cd9c75d7a06d80ee2af7344fdbc5171
485c96b012d50819c2ff616a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.546394 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.569070 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.594785 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.632190 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.632248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.632303 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.632333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.632355 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:26Z","lastTransitionTime":"2025-12-03T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.636789 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.681200 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.714877 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.735374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.735437 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.735454 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.735480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.735499 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:26Z","lastTransitionTime":"2025-12-03T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.754188 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.798728 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.839168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.839225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.839243 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.839288 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.839299 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:26Z","lastTransitionTime":"2025-12-03T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.842219 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.941345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.941388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.941400 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.941417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:26 crc kubenswrapper[4811]: I1203 00:06:26.941456 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:26Z","lastTransitionTime":"2025-12-03T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.044300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.044379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.044393 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.044417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.044433 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:27Z","lastTransitionTime":"2025-12-03T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.167753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.167804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.167819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.167841 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.167856 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:27Z","lastTransitionTime":"2025-12-03T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.271016 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.271063 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.271074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.271120 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.271134 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:27Z","lastTransitionTime":"2025-12-03T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.374378 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.374433 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.374444 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.374465 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.374483 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:27Z","lastTransitionTime":"2025-12-03T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.381845 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.477400 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.477452 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.477460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.477477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.477490 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:27Z","lastTransitionTime":"2025-12-03T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.580574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.580624 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.580638 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.580658 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.580671 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:27Z","lastTransitionTime":"2025-12-03T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.684314 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.684350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.684360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.684379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.684391 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:27Z","lastTransitionTime":"2025-12-03T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.787463 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.787546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.787556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.787581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.787593 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:27Z","lastTransitionTime":"2025-12-03T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.891064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.891153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.891188 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.891223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.891243 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:27Z","lastTransitionTime":"2025-12-03T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.994376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.994463 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.994483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.994515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:27 crc kubenswrapper[4811]: I1203 00:06:27.994534 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:27Z","lastTransitionTime":"2025-12-03T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.097715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.097773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.097791 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.097819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.097837 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:28Z","lastTransitionTime":"2025-12-03T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.114281 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.114282 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.114332 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:28 crc kubenswrapper[4811]: E1203 00:06:28.114460 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:28 crc kubenswrapper[4811]: E1203 00:06:28.114610 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:28 crc kubenswrapper[4811]: E1203 00:06:28.114744 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.201206 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.201640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.201742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.201847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.201938 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:28Z","lastTransitionTime":"2025-12-03T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.306970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.307660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.307746 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.307877 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.307968 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:28Z","lastTransitionTime":"2025-12-03T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.388927 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/0.log" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.392108 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a" exitCode=1 Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.392168 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a"} Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.392995 4811 scope.go:117] "RemoveContainer" containerID="a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.410124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.410187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.410205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.410230 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.410253 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:28Z","lastTransitionTime":"2025-12-03T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.410442 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.429509 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.444206 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.459215 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.476769 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.515022 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b4371
5f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.516148 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.516290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.516379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.516489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.516581 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:28Z","lastTransitionTime":"2025-12-03T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.528354 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.542432 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.555327 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.570595 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.590341 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:27Z\\\",\\\"message\\\":\\\"27.643599 6098 factory.go:656] Stopping watch factory\\\\nI1203 00:06:27.643639 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 00:06:27.643670 6098 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:27.643726 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.643853 6098 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643897 6098 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643868 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.644051 6098 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.644312 6098 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900
ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.603895 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v
tgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.619711 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.619750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.619766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.619788 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.619802 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:28Z","lastTransitionTime":"2025-12-03T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.620352 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.635928 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.651216 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:28Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.723384 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.723436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.723446 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.723466 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.723482 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:28Z","lastTransitionTime":"2025-12-03T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.826807 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.826855 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.826869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.826891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.826906 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:28Z","lastTransitionTime":"2025-12-03T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.929362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.929415 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.929425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.929443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:28 crc kubenswrapper[4811]: I1203 00:06:28.929453 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:28Z","lastTransitionTime":"2025-12-03T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.031667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.031724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.031742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.031768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.031785 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:29Z","lastTransitionTime":"2025-12-03T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.133879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.133927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.133939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.133957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.133969 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:29Z","lastTransitionTime":"2025-12-03T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.236990 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.237050 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.237061 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.237081 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.237126 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:29Z","lastTransitionTime":"2025-12-03T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.340651 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.340712 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.340725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.340746 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.340761 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:29Z","lastTransitionTime":"2025-12-03T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.398057 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/0.log" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.401284 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c"} Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.401494 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.423908 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.438066 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.443099 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.443163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.443178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.443204 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.443219 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:29Z","lastTransitionTime":"2025-12-03T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.456535 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.470383 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.489070 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.502958 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.514500 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.527178 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.538607 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.546021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.546065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.546074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.546096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.546108 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:29Z","lastTransitionTime":"2025-12-03T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.552924 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.566842 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.580768 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.605858 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:27Z\\\",\\\"message\\\":\\\"27.643599 6098 factory.go:656] Stopping watch factory\\\\nI1203 00:06:27.643639 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 00:06:27.643670 6098 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:27.643726 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.643853 6098 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643897 6098 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643868 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.644051 6098 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.644312 6098 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.616494 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.628320 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.648769 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.648807 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.648816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.648833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.648844 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:29Z","lastTransitionTime":"2025-12-03T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.751778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.751853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.751874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.751907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.751932 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:29Z","lastTransitionTime":"2025-12-03T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.855231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.855301 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.855313 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.855331 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.855346 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:29Z","lastTransitionTime":"2025-12-03T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.958722 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.958769 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.958780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.958800 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:29 crc kubenswrapper[4811]: I1203 00:06:29.958810 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:29Z","lastTransitionTime":"2025-12-03T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.061873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.062027 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.062053 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.062089 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.062117 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:30Z","lastTransitionTime":"2025-12-03T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.117390 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:30 crc kubenswrapper[4811]: E1203 00:06:30.117551 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.117906 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:30 crc kubenswrapper[4811]: E1203 00:06:30.117970 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.118018 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:30 crc kubenswrapper[4811]: E1203 00:06:30.118063 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.138718 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.152406 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.165125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.165204 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.165216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.165236 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.165249 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:30Z","lastTransitionTime":"2025-12-03T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.169877 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.186485 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.212673 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.225774 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.242648 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.258163 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.268763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.268800 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.268810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.268827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.268839 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:30Z","lastTransitionTime":"2025-12-03T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.273696 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.294131 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.310019 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.332123 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.358899 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:27Z\\\",\\\"message\\\":\\\"27.643599 6098 factory.go:656] Stopping watch factory\\\\nI1203 00:06:27.643639 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 00:06:27.643670 6098 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:27.643726 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.643853 6098 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643897 6098 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643868 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.644051 6098 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.644312 6098 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.371602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.371668 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.371678 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.371697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.371710 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:30Z","lastTransitionTime":"2025-12-03T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.375417 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.391450 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.408047 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/1.log" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.409472 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/0.log" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.412839 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c" exitCode=1 Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.412899 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c"} Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.412966 4811 scope.go:117] "RemoveContainer" containerID="a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.413824 4811 scope.go:117] "RemoveContainer" containerID="ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c" Dec 03 00:06:30 crc kubenswrapper[4811]: 
E1203 00:06:30.414025 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.434694 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-li
b\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:27Z\\\",\\\"message\\\":\\\"27.643599 6098 factory.go:656] Stopping watch factory\\\\nI1203 00:06:27.643639 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 00:06:27.643670 6098 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:27.643726 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.643853 6098 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643897 6098 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643868 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.644051 6098 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.644312 6098 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:29Z\\\",\\\"message\\\":\\\"r removal\\\\nI1203 00:06:29.343390 6223 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:06:29.343395 6223 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 00:06:29.343462 6223 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1203 00:06:29.343471 6223 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1203 00:06:29.343536 6223 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1203 00:06:29.343589 6223 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:29.343600 6223 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:06:29.343607 6223 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 00:06:29.343616 6223 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:06:29.343624 6223 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 00:06:29.343631 6223 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1203 00:06:29.345131 6223 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 00:06:29.345179 6223 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 00:06:29.345246 6223 factory.go:656] Stopping watch factory\\\\nI1203 00:06:29.345286 6223 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:06:29.345295 6223 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 
00:06:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.448767 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.462832 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac
117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.474359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.474421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.474437 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.474462 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.474481 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:30Z","lastTransitionTime":"2025-12-03T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.478339 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.492914 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.507642 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.520983 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.536039 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.552581 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.572767 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.578147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.578183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.578195 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.578216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.578229 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:30Z","lastTransitionTime":"2025-12-03T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.589057 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.603005 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.615740 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.636501 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.647749 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.682045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.682104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.682114 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.682135 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.682146 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:30Z","lastTransitionTime":"2025-12-03T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.785452 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.785502 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.785513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.785533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.785545 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:30Z","lastTransitionTime":"2025-12-03T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.889327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.889730 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.889810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.889902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.889978 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:30Z","lastTransitionTime":"2025-12-03T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.992543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.992586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.992597 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.992615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:30 crc kubenswrapper[4811]: I1203 00:06:30.992628 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:30Z","lastTransitionTime":"2025-12-03T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.096421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.096469 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.096482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.096506 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.096520 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.199894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.199946 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.199959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.199983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.199997 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.303232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.303330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.303354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.303387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.303413 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.405691 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.405737 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.405752 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.405772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.405785 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.423081 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/1.log" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.508803 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.508865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.508876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.508898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.508910 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.528945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.528996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.529006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.529027 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.529039 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: E1203 00:06:31.545726 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.550474 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.550517 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.550527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.550547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.550558 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: E1203 00:06:31.564021 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.578196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.578244 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.578281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.578309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.578327 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: E1203 00:06:31.592217 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.597471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.597528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.597550 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.597576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.597594 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: E1203 00:06:31.612555 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.616837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.616891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.616904 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.616930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.616943 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: E1203 00:06:31.635474 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:31Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:31 crc kubenswrapper[4811]: E1203 00:06:31.635648 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.637810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.637875 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.637895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.637965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.638056 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.741621 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.741685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.741700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.741724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.741738 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.844918 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.844966 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.844981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.845001 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.845013 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.948512 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.948661 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.948688 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.948713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:31 crc kubenswrapper[4811]: I1203 00:06:31.948734 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:31Z","lastTransitionTime":"2025-12-03T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.051693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.051743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.051756 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.051784 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.051800 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:32Z","lastTransitionTime":"2025-12-03T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.093422 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp"] Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.094010 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.096760 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.097070 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.114187 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:32 crc kubenswrapper[4811]: E1203 00:06:32.114419 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.114474 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:32 crc kubenswrapper[4811]: E1203 00:06:32.114817 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.114847 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:32 crc kubenswrapper[4811]: E1203 00:06:32.115019 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.121755 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.126822 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvhh2\" (UniqueName: \"kubernetes.io/projected/3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5-kube-api-access-wvhh2\") pod \"ovnkube-control-plane-749d76644c-m46wp\" (UID: \"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.127219 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m46wp\" (UID: 
\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.127636 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m46wp\" (UID: \"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.127907 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m46wp\" (UID: \"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.139757 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.155359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.155433 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.155454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.155483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.155503 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:32Z","lastTransitionTime":"2025-12-03T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.162463 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.179496 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.220303 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.229034 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m46wp\" (UID: \"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.229091 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m46wp\" (UID: \"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.229130 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvhh2\" (UniqueName: \"kubernetes.io/projected/3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5-kube-api-access-wvhh2\") pod \"ovnkube-control-plane-749d76644c-m46wp\" (UID: \"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.229171 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m46wp\" (UID: \"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.229956 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m46wp\" (UID: \"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.229989 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m46wp\" (UID: \"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.234892 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.237216 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m46wp\" (UID: \"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.249125 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvhh2\" (UniqueName: \"kubernetes.io/projected/3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5-kube-api-access-wvhh2\") pod \"ovnkube-control-plane-749d76644c-m46wp\" (UID: \"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.250517 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.259761 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.259831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.259842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.259909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.259923 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:32Z","lastTransitionTime":"2025-12-03T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.271649 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:
06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.286143 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.301554 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.316783 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.331964 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.350285 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.363180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.363224 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.363238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.363280 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.363296 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:32Z","lastTransitionTime":"2025-12-03T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.381497 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56
a31322ceabae4318f9d67e4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:27Z\\\",\\\"message\\\":\\\"27.643599 6098 factory.go:656] Stopping watch factory\\\\nI1203 00:06:27.643639 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 00:06:27.643670 6098 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:27.643726 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.643853 6098 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643897 6098 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643868 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.644051 6098 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.644312 6098 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:29Z\\\",\\\"message\\\":\\\"r removal\\\\nI1203 00:06:29.343390 6223 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:06:29.343395 6223 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 00:06:29.343462 6223 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1203 00:06:29.343471 6223 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1203 00:06:29.343536 6223 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1203 00:06:29.343589 6223 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:29.343600 6223 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:06:29.343607 6223 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 00:06:29.343616 6223 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:06:29.343624 6223 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 00:06:29.343631 6223 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1203 00:06:29.345131 6223 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 00:06:29.345179 6223 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 00:06:29.345246 6223 factory.go:656] Stopping watch factory\\\\nI1203 
00:06:29.345286 6223 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:06:29.345295 6223 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 00:06:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4
daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.401642 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.415750 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.419424 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:32Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:32 crc kubenswrapper[4811]: W1203 00:06:32.431890 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d51e76d_e9e8_46ee_b4bf_4e2306d34ac5.slice/crio-1af62de31627f8d4432dce65486fd5b7a473d0b49ceff6b03229acc5bbf497de WatchSource:0}: Error finding container 1af62de31627f8d4432dce65486fd5b7a473d0b49ceff6b03229acc5bbf497de: Status 404 returned error can't find the container with id 1af62de31627f8d4432dce65486fd5b7a473d0b49ceff6b03229acc5bbf497de Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.467452 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.467499 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.467519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.467543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.467561 
4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:32Z","lastTransitionTime":"2025-12-03T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.575472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.575525 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.575536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.575562 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.575576 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:32Z","lastTransitionTime":"2025-12-03T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.677789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.677849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.677865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.677885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.677899 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:32Z","lastTransitionTime":"2025-12-03T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.781003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.781056 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.781069 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.781088 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.781099 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:32Z","lastTransitionTime":"2025-12-03T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.883793 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.883836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.883845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.883864 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.883878 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:32Z","lastTransitionTime":"2025-12-03T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.986621 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.986708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.986734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.986768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:32 crc kubenswrapper[4811]: I1203 00:06:32.986792 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:32Z","lastTransitionTime":"2025-12-03T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.089741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.089793 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.089804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.089823 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.089834 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:33Z","lastTransitionTime":"2025-12-03T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.192428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.192477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.192489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.192509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.192521 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:33Z","lastTransitionTime":"2025-12-03T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.281453 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5w9pv"] Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.282136 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.282225 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.295624 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.295684 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.295700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.295723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.295737 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:33Z","lastTransitionTime":"2025-12-03T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.307627 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.321775 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.341397 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.341754 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58tt6\" (UniqueName: \"kubernetes.io/projected/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-kube-api-access-58tt6\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.341873 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.358856 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9
ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06
:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.381750 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56
a31322ceabae4318f9d67e4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:27Z\\\",\\\"message\\\":\\\"27.643599 6098 factory.go:656] Stopping watch factory\\\\nI1203 00:06:27.643639 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 00:06:27.643670 6098 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:27.643726 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.643853 6098 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643897 6098 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643868 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.644051 6098 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.644312 6098 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:29Z\\\",\\\"message\\\":\\\"r removal\\\\nI1203 00:06:29.343390 6223 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:06:29.343395 6223 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 00:06:29.343462 6223 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1203 00:06:29.343471 6223 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1203 00:06:29.343536 6223 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1203 00:06:29.343589 6223 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:29.343600 6223 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:06:29.343607 6223 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 00:06:29.343616 6223 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:06:29.343624 6223 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 00:06:29.343631 6223 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1203 00:06:29.345131 6223 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 00:06:29.345179 6223 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 00:06:29.345246 6223 factory.go:656] Stopping watch factory\\\\nI1203 
00:06:29.345286 6223 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:06:29.345295 6223 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 00:06:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4
daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.395923 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.398645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.398687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.398700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.398719 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.398731 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:33Z","lastTransitionTime":"2025-12-03T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.410684 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.426958 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.437733 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" event={"ID":"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5","Type":"ContainerStarted","Data":"8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.437815 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" event={"ID":"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5","Type":"ContainerStarted","Data":"dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.437833 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" event={"ID":"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5","Type":"ContainerStarted","Data":"1af62de31627f8d4432dce65486fd5b7a473d0b49ceff6b03229acc5bbf497de"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.442762 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58tt6\" (UniqueName: \"kubernetes.io/projected/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-kube-api-access-58tt6\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.442848 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.443006 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.443070 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs podName:ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c nodeName:}" failed. No retries permitted until 2025-12-03 00:06:33.943053541 +0000 UTC m=+34.084883033 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs") pod "network-metrics-daemon-5w9pv" (UID: "ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.445034 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.460759 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.464648 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58tt6\" (UniqueName: \"kubernetes.io/projected/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-kube-api-access-58tt6\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.478795 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.496390 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.501559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.501610 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.501630 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.501657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.501678 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:33Z","lastTransitionTime":"2025-12-03T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.512048 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.529782 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.548230 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.561978 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.575814 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.598249 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b4371
5f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.604079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.604172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.604194 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.604225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.604247 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:33Z","lastTransitionTime":"2025-12-03T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.611555 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.626887 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.646429 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.662283 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.682415 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:27Z\\\",\\\"message\\\":\\\"27.643599 6098 factory.go:656] Stopping watch factory\\\\nI1203 00:06:27.643639 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 00:06:27.643670 6098 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:27.643726 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.643853 6098 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643897 6098 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643868 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.644051 6098 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.644312 6098 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:29Z\\\",\\\"message\\\":\\\"r removal\\\\nI1203 00:06:29.343390 6223 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:06:29.343395 6223 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 00:06:29.343462 6223 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1203 00:06:29.343471 6223 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1203 00:06:29.343536 6223 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1203 00:06:29.343589 6223 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:29.343600 6223 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:06:29.343607 6223 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 00:06:29.343616 6223 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:06:29.343624 6223 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 00:06:29.343631 6223 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1203 00:06:29.345131 6223 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 00:06:29.345179 6223 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 00:06:29.345246 6223 factory.go:656] Stopping watch factory\\\\nI1203 00:06:29.345286 6223 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:06:29.345295 6223 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 
00:06:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.696681 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.707541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.707582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.707591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.707608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.707620 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:33Z","lastTransitionTime":"2025-12-03T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.716294 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.737524 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.754031 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.765810 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.778625 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.790136 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 
00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.802623 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.810944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.810996 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.811008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.811029 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.811044 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:33Z","lastTransitionTime":"2025-12-03T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.815723 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.829249 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.846011 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.914086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.914203 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.914227 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.914314 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.914346 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:33Z","lastTransitionTime":"2025-12-03T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.949047 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.949244 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.949340 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.949481 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.949621 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.949690 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.949711 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.949475 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:06:49.949335254 +0000 UTC m=+50.091164866 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.949869 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.949953 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.949994 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:49.949914929 +0000 UTC m=+50.091744391 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.950073 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:49.950053722 +0000 UTC m=+50.091883194 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.950136 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.950239 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.950291 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs podName:ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c nodeName:}" failed. 
No retries permitted until 2025-12-03 00:06:34.950232778 +0000 UTC m=+35.092062260 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs") pod "network-metrics-daemon-5w9pv" (UID: "ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:33 crc kubenswrapper[4811]: I1203 00:06:33.950410 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.950493 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:49.950473044 +0000 UTC m=+50.092302516 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.950522 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.950548 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.950566 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:33 crc kubenswrapper[4811]: E1203 00:06:33.950623 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:49.950609597 +0000 UTC m=+50.092439249 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.016484 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.016554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.016572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.016599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.016618 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:34Z","lastTransitionTime":"2025-12-03T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.118063 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.118193 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:34 crc kubenswrapper[4811]: E1203 00:06:34.118302 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:34 crc kubenswrapper[4811]: E1203 00:06:34.118454 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.118756 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:34 crc kubenswrapper[4811]: E1203 00:06:34.118877 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.120015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.120126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.120193 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.120289 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.120371 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:34Z","lastTransitionTime":"2025-12-03T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.223766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.224168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.224278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.224350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.224408 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:34Z","lastTransitionTime":"2025-12-03T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.327552 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.327649 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.327679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.327708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.327729 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:34Z","lastTransitionTime":"2025-12-03T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.431356 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.431448 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.431467 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.431495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.431523 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:34Z","lastTransitionTime":"2025-12-03T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.535173 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.535313 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.535340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.535374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.535399 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:34Z","lastTransitionTime":"2025-12-03T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.638119 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.638180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.638191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.638213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.638226 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:34Z","lastTransitionTime":"2025-12-03T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.740772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.740812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.740820 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.740839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.740851 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:34Z","lastTransitionTime":"2025-12-03T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.844178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.844242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.844286 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.844318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.844335 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:34Z","lastTransitionTime":"2025-12-03T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.948008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.948044 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.948054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.948071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.948080 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:34Z","lastTransitionTime":"2025-12-03T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:34 crc kubenswrapper[4811]: I1203 00:06:34.960009 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:34 crc kubenswrapper[4811]: E1203 00:06:34.960212 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:34 crc kubenswrapper[4811]: E1203 00:06:34.960372 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs podName:ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c nodeName:}" failed. No retries permitted until 2025-12-03 00:06:36.9603397 +0000 UTC m=+37.102169212 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs") pod "network-metrics-daemon-5w9pv" (UID: "ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.051092 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.051135 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.051146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.051163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.051176 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:35Z","lastTransitionTime":"2025-12-03T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.088977 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.113751 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56
a31322ceabae4318f9d67e4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:27Z\\\",\\\"message\\\":\\\"27.643599 6098 factory.go:656] Stopping watch factory\\\\nI1203 00:06:27.643639 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 00:06:27.643670 6098 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:27.643726 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.643853 6098 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643897 6098 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643868 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.644051 6098 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.644312 6098 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:29Z\\\",\\\"message\\\":\\\"r removal\\\\nI1203 00:06:29.343390 6223 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:06:29.343395 6223 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 00:06:29.343462 6223 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1203 00:06:29.343471 6223 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1203 00:06:29.343536 6223 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1203 00:06:29.343589 6223 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:29.343600 6223 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:06:29.343607 6223 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 00:06:29.343616 6223 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:06:29.343624 6223 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 00:06:29.343631 6223 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1203 00:06:29.345131 6223 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 00:06:29.345179 6223 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 00:06:29.345246 6223 factory.go:656] Stopping watch factory\\\\nI1203 
00:06:29.345286 6223 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:06:29.345295 6223 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 00:06:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4
daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.114680 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:35 crc kubenswrapper[4811]: E1203 00:06:35.114894 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.129566 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.146221 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.155759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.155815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.155829 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.155852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.155868 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:35Z","lastTransitionTime":"2025-12-03T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.163564 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.176943 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.190772 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.208458 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.228512 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.245308 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.257168 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 
00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.258365 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.258399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.258412 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.258428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.258440 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:35Z","lastTransitionTime":"2025-12-03T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.268222 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.281342 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.297946 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.314379 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.327546 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.349697 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.361510 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.361567 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.361584 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.361605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.361618 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:35Z","lastTransitionTime":"2025-12-03T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.364202 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.464278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.464323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.464335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.464351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.464362 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:35Z","lastTransitionTime":"2025-12-03T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.567287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.567369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.567388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.567420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.567441 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:35Z","lastTransitionTime":"2025-12-03T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.670780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.670817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.670827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.670843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.670853 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:35Z","lastTransitionTime":"2025-12-03T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.773755 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.773822 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.773843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.773872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.773894 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:35Z","lastTransitionTime":"2025-12-03T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.880535 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.880658 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.880669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.880695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.880706 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:35Z","lastTransitionTime":"2025-12-03T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.984275 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.984317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.984327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.984345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:35 crc kubenswrapper[4811]: I1203 00:06:35.984355 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:35Z","lastTransitionTime":"2025-12-03T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.087407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.087442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.087450 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.087462 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.087473 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:36Z","lastTransitionTime":"2025-12-03T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.114595 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.114702 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.114648 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:36 crc kubenswrapper[4811]: E1203 00:06:36.114938 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:36 crc kubenswrapper[4811]: E1203 00:06:36.115070 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:36 crc kubenswrapper[4811]: E1203 00:06:36.115152 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.190185 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.190239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.190255 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.190318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.190335 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:36Z","lastTransitionTime":"2025-12-03T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.294193 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.294592 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.294691 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.294759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.294839 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:36Z","lastTransitionTime":"2025-12-03T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.397472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.397529 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.397541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.397557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.397570 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:36Z","lastTransitionTime":"2025-12-03T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.499789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.499840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.500064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.500082 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.500103 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:36Z","lastTransitionTime":"2025-12-03T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.603022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.603066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.603077 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.603092 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.603101 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:36Z","lastTransitionTime":"2025-12-03T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.705754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.705799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.705810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.705847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.705859 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:36Z","lastTransitionTime":"2025-12-03T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.808854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.808928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.808952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.808981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.809001 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:36Z","lastTransitionTime":"2025-12-03T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.911225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.911284 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.911296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.911312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.911325 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:36Z","lastTransitionTime":"2025-12-03T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:36 crc kubenswrapper[4811]: I1203 00:06:36.982713 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:36 crc kubenswrapper[4811]: E1203 00:06:36.982858 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:36 crc kubenswrapper[4811]: E1203 00:06:36.982918 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs podName:ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c nodeName:}" failed. No retries permitted until 2025-12-03 00:06:40.982897746 +0000 UTC m=+41.124727218 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs") pod "network-metrics-daemon-5w9pv" (UID: "ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.014826 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.014879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.014901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.014929 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.014951 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.114617 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:37 crc kubenswrapper[4811]: E1203 00:06:37.114822 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.322873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.322938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.322959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.322983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.322997 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.426624 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.426701 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.426716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.426743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.426769 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.529464 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.530109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.530194 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.530341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.530475 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.634306 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.634392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.634432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.634457 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.634473 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.737720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.738342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.738381 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.738403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.738417 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.841702 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.841750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.841759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.841777 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.841788 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.944803 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.944895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.944906 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.944926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4811]: I1203 00:06:37.944958 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.048341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.048386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.048405 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.048428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.048440 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.114164 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.114308 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:38 crc kubenswrapper[4811]: E1203 00:06:38.114373 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.114478 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:38 crc kubenswrapper[4811]: E1203 00:06:38.114672 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:38 crc kubenswrapper[4811]: E1203 00:06:38.114839 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.151326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.151390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.151408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.151432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.151450 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.254816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.254873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.254882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.254900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.254913 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.357828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.357889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.357901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.357922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.357936 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.460553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.460628 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.460645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.460665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.460679 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.563117 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.563160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.563170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.563189 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.563200 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.665839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.665909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.665925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.665945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.665959 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.769005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.769069 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.769080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.769098 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.769109 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.871809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.871863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.871879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.871904 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.871919 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.975114 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.975177 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.975194 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.975218 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4811]: I1203 00:06:38.975231 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.084104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.084205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.084221 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.084245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.084276 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.114637 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:39 crc kubenswrapper[4811]: E1203 00:06:39.114815 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.187158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.187202 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.187214 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.187233 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.187249 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.289973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.290033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.290046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.290074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.290089 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.393466 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.393554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.393575 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.393607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.393628 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.496547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.496605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.496619 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.496789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.496807 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.599897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.599942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.599951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.599968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.599978 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.702995 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.703032 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.703041 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.703062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.703075 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.806692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.806761 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.806795 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.806815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.806826 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.910253 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.910325 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.910335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.910368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4811]: I1203 00:06:39.910382 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.013613 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.013667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.013679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.013732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.013746 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.114683 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.114794 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.114700 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:40 crc kubenswrapper[4811]: E1203 00:06:40.114922 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:40 crc kubenswrapper[4811]: E1203 00:06:40.115055 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:40 crc kubenswrapper[4811]: E1203 00:06:40.115231 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.117833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.117880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.117894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.117913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.117929 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.136441 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.154053 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.171022 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" 
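
The status-patch failures recorded above all fail for the same reason: the kubelet's POST to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the node's current time of 2025-12-03T00:06:40Z. The "x509: certificate has expired or is not yet valid" text is the standard Go validity-window error. Below is a minimal sketch of that same NotBefore/NotAfter comparison, assuming the serving certificate has been copied to a local PEM file; webhook.crt is a placeholder name, not a path taken from this log.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Path to a PEM-encoded certificate; "webhook.crt" is a placeholder,
        // not a file referenced anywhere in this log.
        raw, err := os.ReadFile("webhook.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil || block.Type != "CERTIFICATE" {
            log.Fatal("no CERTIFICATE block found in PEM input")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        now := time.Now().UTC()
        fmt.Printf("NotBefore: %s\nNotAfter:  %s\nNow:       %s\n",
            cert.NotBefore.Format(time.RFC3339),
            cert.NotAfter.Format(time.RFC3339),
            now.Format(time.RFC3339))

        // Mirrors the validity-window test behind the
        // "certificate has expired or is not yet valid" error in the log.
        switch {
        case now.Before(cert.NotBefore):
            fmt.Println("certificate is not yet valid")
        case now.After(cert.NotAfter):
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        default:
            fmt.Println("certificate is within its validity window")
        }
    }

In the log itself the comparison happens inside the kubelet's TLS handshake with the webhook, so these failures persist until the certificate is rotated or the node clock again falls inside its validity window.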
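Separately, the recurring KubeletNotReady condition ("no CNI configuration file in /etc/kubernetes/cni/net.d/") means the container runtime found nothing in the CNI configuration directory, which is normally populated by the cluster network provider (OVN-Kubernetes here) once its node pods come up. The following is a minimal sketch of that directory check, assuming it is run directly on the node; it is an illustration of the reported condition, not the CRI-O/kubelet code path.

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the kubelet error; on a healthy node the
        // network provider drops its CNI config here.
        const confDir = "/etc/kubernetes/cni/net.d"

        entries, err := os.ReadDir(confDir)
        if err != nil {
            log.Fatalf("cannot read %s: %v", confDir, err)
        }

        found := 0
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config:", filepath.Join(confDir, e.Name()))
                found++
            }
        }
        if found == 0 {
            // Matches the condition the kubelet keeps reporting:
            // "no CNI configuration file in /etc/kubernetes/cni/net.d/"
            fmt.Println("no CNI configuration file found; network plugin not ready")
        }
    }

Once the network provider writes its configuration into that directory, the kubelet's periodic NetworkReady probe succeeds and the repeated NodeNotReady/KubeletNotReady entries above stop.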
Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.194546 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.215026 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.220080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.220134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.220154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.220180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.220197 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.235533 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.250533 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.273733 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.284756 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.297609 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.312063 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.322403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.322446 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.322460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.322482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.322500 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.328525 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.344642 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.363828 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.382956 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.402765 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73527785cd9c75d7a06d80ee2af7344fdbc5171485c96b012d50819c2ff616a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:27Z\\\",\\\"message\\\":\\\"27.643599 6098 factory.go:656] Stopping watch factory\\\\nI1203 00:06:27.643639 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 00:06:27.643670 6098 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:27.643726 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.643853 6098 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643897 6098 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.643868 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:06:27.644051 6098 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:06:27.644312 6098 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:29Z\\\",\\\"message\\\":\\\"r removal\\\\nI1203 00:06:29.343390 6223 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:06:29.343395 6223 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 00:06:29.343462 6223 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1203 00:06:29.343471 6223 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1203 00:06:29.343536 6223 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1203 00:06:29.343589 6223 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:29.343600 6223 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:06:29.343607 6223 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 00:06:29.343616 6223 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:06:29.343624 6223 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 00:06:29.343631 6223 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1203 00:06:29.345131 6223 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 00:06:29.345179 6223 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 00:06:29.345246 6223 factory.go:656] Stopping watch factory\\\\nI1203 00:06:29.345286 6223 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:06:29.345295 6223 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 
00:06:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.413832 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.425534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.425643 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.425661 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.425690 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.425736 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.530332 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.530397 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.530417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.530443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.530462 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.633786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.633837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.633849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.633869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.633883 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.737157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.737235 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.737251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.737305 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.737325 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.840073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.840124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.840136 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.840156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.840168 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.942595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.942634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.942643 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.942660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4811]: I1203 00:06:40.942672 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.029573 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:41 crc kubenswrapper[4811]: E1203 00:06:41.029814 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:41 crc kubenswrapper[4811]: E1203 00:06:41.030084 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs podName:ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c nodeName:}" failed. No retries permitted until 2025-12-03 00:06:49.029870225 +0000 UTC m=+49.171699697 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs") pod "network-metrics-daemon-5w9pv" (UID: "ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.046039 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.046089 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.046105 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.046123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.046136 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.114553 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:41 crc kubenswrapper[4811]: E1203 00:06:41.114822 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.116014 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.120933 4811 scope.go:117] "RemoveContainer" containerID="ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.139800 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.152013 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.152425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.152439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.152459 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.152470 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.161197 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.176418 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.193047 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.220588 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b4371
5f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.234299 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.250630 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-confi
g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.255532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.255557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.255566 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.255580 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.255590 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.270017 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.291898 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:29Z\\\",\\\"message\\\":\\\"r removal\\\\nI1203 00:06:29.343390 6223 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:06:29.343395 6223 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 00:06:29.343462 6223 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1203 00:06:29.343471 6223 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1203 00:06:29.343536 6223 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1203 00:06:29.343589 6223 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:29.343600 6223 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:06:29.343607 6223 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 00:06:29.343616 6223 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:06:29.343624 6223 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 00:06:29.343631 6223 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1203 00:06:29.345131 6223 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 00:06:29.345179 6223 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 00:06:29.345246 6223 factory.go:656] Stopping watch factory\\\\nI1203 00:06:29.345286 6223 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:06:29.345295 6223 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 00:06:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.301882 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.316752 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.331417 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.343883 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.357579 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.358109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.358137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.358146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.358166 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.358177 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.373105 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.386005 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 
00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.397691 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.460691 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.460729 4811 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.460741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.460757 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.460770 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.470888 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/1.log" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.473113 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359"} Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.474071 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.491448 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.508670 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 
00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.520628 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.534817 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.546408 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.559888 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.567486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.567522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.567539 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.567561 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.567574 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.581695 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.609362 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06
:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.623485 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.648147 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.667307 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:29Z\\\",\\\"message\\\":\\\"r removal\\\\nI1203 00:06:29.343390 6223 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:06:29.343395 6223 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 00:06:29.343462 6223 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1203 00:06:29.343471 6223 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1203 00:06:29.343536 6223 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1203 00:06:29.343589 6223 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:29.343600 6223 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:06:29.343607 6223 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 00:06:29.343616 6223 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:06:29.343624 6223 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 00:06:29.343631 6223 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1203 00:06:29.345131 6223 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 00:06:29.345179 6223 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 00:06:29.345246 6223 factory.go:656] Stopping watch factory\\\\nI1203 00:06:29.345286 6223 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:06:29.345295 6223 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 
00:06:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.670461 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.670484 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.670491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.670505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.670514 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.679016 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.692554 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.705553 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.719229 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.731856 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.745619 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.772547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.772602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.772616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.772636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.772649 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.875324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.875584 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.875653 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.875722 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.875817 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.903003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.903623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.903723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.903812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.903896 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: E1203 00:06:41.920067 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.925443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.925514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.925533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.925559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.925577 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: E1203 00:06:41.945992 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.950898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.951153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.951178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.951210 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.951230 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: E1203 00:06:41.966454 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.972004 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.972202 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.972337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.972435 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.972539 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4811]: E1203 00:06:41.987643 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.992003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.992041 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.992054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.992070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4811]: I1203 00:06:41.992082 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4811]: E1203 00:06:42.008823 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: E1203 00:06:42.008939 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.010825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.010867 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.010880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.010897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.010906 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.113335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.113397 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.113407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.113421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.113431 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.113875 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:42 crc kubenswrapper[4811]: E1203 00:06:42.113965 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.114030 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.114043 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:42 crc kubenswrapper[4811]: E1203 00:06:42.114117 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:42 crc kubenswrapper[4811]: E1203 00:06:42.114306 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.216115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.216202 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.216223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.216252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.216311 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.319520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.319569 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.319581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.319600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.319611 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.422949 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.423199 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.423212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.423233 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.423250 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.478448 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/2.log" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.479076 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/1.log" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.481612 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359" exitCode=1 Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.481657 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359"} Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.481695 4811 scope.go:117] "RemoveContainer" containerID="ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.482463 4811 scope.go:117] "RemoveContainer" containerID="87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359" Dec 03 00:06:42 crc kubenswrapper[4811]: E1203 00:06:42.482631 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.498522 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.511960 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.524850 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.525884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.525930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.525941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.526134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.526145 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.543016 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.564400 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.592220 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2c2aff3d7a047ad9c1853effa9510267f41d56a31322ceabae4318f9d67e4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:29Z\\\",\\\"message\\\":\\\"r removal\\\\nI1203 00:06:29.343390 6223 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:06:29.343395 6223 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 00:06:29.343462 6223 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1203 00:06:29.343471 6223 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1203 00:06:29.343536 6223 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1203 00:06:29.343589 6223 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 00:06:29.343600 6223 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:06:29.343607 6223 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 00:06:29.343616 6223 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:06:29.343624 6223 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 00:06:29.343631 6223 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1203 00:06:29.345131 6223 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 00:06:29.345179 6223 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 00:06:29.345246 6223 factory.go:656] Stopping watch factory\\\\nI1203 00:06:29.345286 6223 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:06:29.345295 6223 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 00:06:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 00:06:42.023793 6430 ovn.go:134] Ensuring zone local for Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1203 00:06:42.023724 6430 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:06:42.023831 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.604665 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.621677 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.629070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.629132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.629145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.629169 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.629180 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.640895 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.656055 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 
00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.670842 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.687242 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.701585 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.714439 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.731483 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.732601 4811 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.732653 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.732666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.732691 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.732705 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.748893 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.790254 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.834672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.834708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.834719 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.834735 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.834748 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.937090 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.937147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.937160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.937234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4811]: I1203 00:06:42.937247 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.040832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.040916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.040938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.040963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.040979 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.114648 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:43 crc kubenswrapper[4811]: E1203 00:06:43.114820 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.143898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.143967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.143990 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.144020 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.144042 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.247217 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.247302 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.247315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.247334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.247348 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.350850 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.350925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.350937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.350963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.350974 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.453697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.453747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.453759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.453781 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.453796 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.487441 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/2.log" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.494099 4811 scope.go:117] "RemoveContainer" containerID="87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359" Dec 03 00:06:43 crc kubenswrapper[4811]: E1203 00:06:43.494329 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.513479 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.532707 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.546943 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.556592 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.556647 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.556659 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.556680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.556693 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.560893 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.576131 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.590832 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.607294 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.628699 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 00:06:42.023793 6430 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1203 00:06:42.023724 6430 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:06:42.023831 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.644150 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.658864 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 
00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.660360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.660429 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.660447 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.660472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.660487 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.672471 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.689936 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.704580 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.718946 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.732307 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.753604 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.763156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.763212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.763226 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.763247 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.763275 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.767399 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.865836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.865881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.865890 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.865931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.865942 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.968877 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.968930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.968941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.968959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4811]: I1203 00:06:43.968972 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.072485 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.072556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.072577 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.072608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.072627 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.115040 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.115127 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.115206 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:44 crc kubenswrapper[4811]: E1203 00:06:44.115379 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:44 crc kubenswrapper[4811]: E1203 00:06:44.115501 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:44 crc kubenswrapper[4811]: E1203 00:06:44.115733 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.188538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.188608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.188622 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.188644 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.188657 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.291308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.291371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.291382 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.291405 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.291419 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.394723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.394796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.394820 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.394847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.394868 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.497039 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.497085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.497098 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.497115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.497127 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.600167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.600246 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.600300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.600339 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.600360 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.704006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.704077 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.704094 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.704121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.704139 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.807054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.807126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.807150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.807180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.807202 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.910032 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.910107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.910133 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.910160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4811]: I1203 00:06:44.910178 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.013018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.013132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.013151 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.013177 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.013195 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.114035 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:45 crc kubenswrapper[4811]: E1203 00:06:45.114202 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.117199 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.117230 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.117240 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.117284 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.117296 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.219621 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.220041 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.220155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.220298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.220464 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.323983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.324076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.324096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.324123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.324142 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.427423 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.427730 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.427747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.427768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.427781 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.530049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.530097 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.530107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.530126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.530136 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.633330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.633405 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.633429 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.633458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.633479 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.736728 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.736797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.736811 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.736833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.736843 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.839994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.840054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.840066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.840088 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.840103 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.942917 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.942959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.942973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.942989 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4811]: I1203 00:06:45.942999 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.045827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.045871 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.045883 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.045900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.045913 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.114309 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.114386 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.114405 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:46 crc kubenswrapper[4811]: E1203 00:06:46.114478 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:46 crc kubenswrapper[4811]: E1203 00:06:46.114594 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:46 crc kubenswrapper[4811]: E1203 00:06:46.114746 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.148192 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.148329 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.148353 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.148422 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.148443 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.252073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.252137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.252154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.252178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.252195 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.355332 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.355403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.355425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.355453 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.355474 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.458768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.458827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.458844 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.458867 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.458885 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.562599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.562676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.562699 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.562729 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.562752 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.665953 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.666029 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.666046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.666111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.666129 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.768884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.769842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.770008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.770208 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.770468 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.873726 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.873784 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.873797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.873821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.873835 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.977103 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.977155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.977171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.977196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4811]: I1203 00:06:46.977213 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.080612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.080678 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.080692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.080722 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.080737 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.114342 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:47 crc kubenswrapper[4811]: E1203 00:06:47.114483 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.128181 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.138391 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.146150 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.158564 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.170845 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.183250 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.183519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.183652 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.183741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.183830 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.187611 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.207739 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.219639 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.229878 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.241455 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.254296 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.272503 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 00:06:42.023793 6430 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1203 00:06:42.023724 6430 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:06:42.023831 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.282585 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.286864 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.287115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.287235 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.287351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.287432 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.295882 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.307572 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.319101 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.330896 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.343130 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.352581 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 
00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.389819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.389853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.389863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.389878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.389890 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.492483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.492542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.492554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.492574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.492587 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.596525 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.596585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.596594 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.596612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.596641 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.699364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.699436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.699455 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.699481 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.699505 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.802651 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.802712 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.802734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.802762 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.802786 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.905951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.906024 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.906048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.906079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4811]: I1203 00:06:47.906102 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.008952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.009018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.009034 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.009474 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.009528 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.112234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.112300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.112310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.112328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.112339 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.114075 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.114124 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.114203 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:48 crc kubenswrapper[4811]: E1203 00:06:48.114213 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:48 crc kubenswrapper[4811]: E1203 00:06:48.114334 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:48 crc kubenswrapper[4811]: E1203 00:06:48.114408 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.215202 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.215234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.215242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.215283 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.215302 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.320250 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.320771 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.321377 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.321878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.322421 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.425832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.425901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.425914 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.425943 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.425957 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.529151 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.529192 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.529203 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.529234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.529246 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.632068 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.632146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.632171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.632206 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.632236 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.735810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.735870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.735884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.735910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.735926 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.849731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.849787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.849802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.849824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.849839 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.952179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.952692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.952713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.952742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4811]: I1203 00:06:48.952765 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.043315 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.043455 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs podName:ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.043421254 +0000 UTC m=+65.185250776 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs") pod "network-metrics-daemon-5w9pv" (UID: "ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.043966 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.056600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.056660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.056671 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.056698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.056717 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.114379 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.114612 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.160286 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.160353 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.160368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.160390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.160407 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.263716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.263749 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.263761 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.263776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.263786 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.366931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.366984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.366994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.367010 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.367020 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.470461 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.470515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.470532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.470557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.470574 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.573614 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.573693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.573713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.573741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.573759 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.677622 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.677687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.677708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.677744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.677780 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.781076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.781132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.781144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.781161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.781183 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.883702 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.883772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.883791 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.883824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.883845 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.953364 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.953560 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.953672 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:07:21.953635423 +0000 UTC m=+82.095464905 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.953806 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.953898 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.953957 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.954048 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.954101 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 
00:06:49.954151 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:21.954125327 +0000 UTC m=+82.095954839 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.954184 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:21.954166388 +0000 UTC m=+82.095995900 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.954344 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.954369 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.954383 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.954441 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:21.954430004 +0000 UTC m=+82.096259486 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.954618 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.954655 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.954675 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:49 crc kubenswrapper[4811]: E1203 00:06:49.954730 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:21.954713732 +0000 UTC m=+82.096543244 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.987359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.987428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.987451 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.987478 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4811]: I1203 00:06:49.987499 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.090929 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.090999 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.091018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.091045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.091063 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.114531 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.114554 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.114604 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:50 crc kubenswrapper[4811]: E1203 00:06:50.114839 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:50 crc kubenswrapper[4811]: E1203 00:06:50.115229 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:50 crc kubenswrapper[4811]: E1203 00:06:50.115075 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.143531 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.162207 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 
00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.178187 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.193547 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9f680f8-059a-4334-afc8-226f41dbf18c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2631b0da901ad6d3813ac0e4eefb7ddb376e9bca75fb6737cc154e9336bea38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662be78be83c4fc0261e0810b70e37365749e1ef960db2bf94ec025e90ca96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://265e4edcc98daf63d66695692e65ca0749f6383ff716dc04b1e4f283d437f640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.194738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.194777 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.194789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.194808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.194821 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.214992 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.234589 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.251079 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.265621 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.276043 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.298332 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.298364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.298373 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.298390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.298401 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.302525 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.315237 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.329065 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.340454 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.355228 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.371584 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.394751 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 00:06:42.023793 6430 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1203 00:06:42.023724 6430 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:06:42.023831 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.400371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.400480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.400582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.400678 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.400852 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.408351 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.423542 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.503708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.504057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.504189 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.504336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.504472 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.607310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.607367 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.607380 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.607403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.607418 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.711033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.711127 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.711156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.711190 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.711213 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.815297 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.815360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.815383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.815411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.815433 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.918601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.918677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.918703 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.918734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4811]: I1203 00:06:50.918756 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.022016 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.022094 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.022109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.022131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.022149 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.113980 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:51 crc kubenswrapper[4811]: E1203 00:06:51.114196 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.125022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.129782 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.130047 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.130214 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.130254 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.233583 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.233634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.233645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.233664 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.233676 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.337157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.337773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.338007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.338224 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.338460 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.441808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.441906 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.441925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.441955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.441979 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.544451 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.544512 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.544524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.544544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.544561 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.647346 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.647445 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.647467 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.647497 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.647515 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.750102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.750146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.750158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.750180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.750194 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.853000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.853042 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.853050 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.853066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.853075 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.956398 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.956445 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.956454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.956469 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4811]: I1203 00:06:51.956479 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.081509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.081915 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.082387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.082745 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.082921 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.115616 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.115705 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.115785 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:52 crc kubenswrapper[4811]: E1203 00:06:52.115824 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:52 crc kubenswrapper[4811]: E1203 00:06:52.115958 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:52 crc kubenswrapper[4811]: E1203 00:06:52.116050 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.186201 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.186286 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.186299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.186323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.186346 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.289193 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.289299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.289315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.289342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.289356 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.290981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.291123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.291219 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.291352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.291448 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: E1203 00:06:52.312034 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.317899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.317950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.317962 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.317979 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.317991 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: E1203 00:06:52.338779 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.345111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.345171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.345187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.345213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.345230 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: E1203 00:06:52.364591 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.368659 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.368716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.368728 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.368744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.368757 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: E1203 00:06:52.386198 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.390295 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.390342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.390357 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.390405 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.390424 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: E1203 00:06:52.406211 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:52 crc kubenswrapper[4811]: E1203 00:06:52.406417 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.408326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.408366 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.408383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.408438 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.408454 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.511679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.511742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.511765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.511793 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.511813 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.614976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.615030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.615046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.615094 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.615112 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.718148 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.718191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.718206 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.718226 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.718240 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.822329 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.822431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.822455 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.822485 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.822508 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.924590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.924640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.924655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.924675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4811]: I1203 00:06:52.924689 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.028019 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.029104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.029377 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.029578 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.029769 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.114369 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:53 crc kubenswrapper[4811]: E1203 00:06:53.114616 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.133196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.133253 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.133296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.133314 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.133326 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.236835 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.237313 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.237478 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.237644 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.237772 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.341180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.341234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.341242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.341282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.341294 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.444633 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.444974 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.444990 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.445004 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.445015 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.548028 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.548123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.548144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.548181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.548203 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.651602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.651881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.651975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.652071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.652222 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.756452 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.756526 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.756544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.756576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.756594 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.859824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.859867 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.859879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.859899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.859911 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.963378 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.963421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.963432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.963453 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4811]: I1203 00:06:53.963478 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.066180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.066292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.066311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.066334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.066346 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.114242 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.114361 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:54 crc kubenswrapper[4811]: E1203 00:06:54.114583 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.114631 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:54 crc kubenswrapper[4811]: E1203 00:06:54.114808 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:54 crc kubenswrapper[4811]: E1203 00:06:54.114958 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.170038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.170073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.170083 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.170101 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.170112 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.273273 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.273306 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.273342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.273358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.273368 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.376472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.376531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.376541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.376558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.376569 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.481687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.481731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.481744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.481766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.481780 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.584245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.584335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.584354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.584378 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.584395 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.687499 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.687556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.687567 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.687589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.687602 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.790399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.790448 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.790459 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.790477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.790524 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.893747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.893848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.893867 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.893916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.893936 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.996767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.996824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.996850 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.996865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4811]: I1203 00:06:54.996875 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.099924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.099996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.100008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.100025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.100037 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.114244 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:55 crc kubenswrapper[4811]: E1203 00:06:55.114386 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.201973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.202035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.202054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.202081 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.202099 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.305698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.305797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.305816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.305840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.305857 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.410165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.410335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.410359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.410426 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.410450 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.513677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.513718 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.513727 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.513744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.513754 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.616697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.616742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.616754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.616770 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.616781 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.720589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.720655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.720676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.720698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.720710 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.824686 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.824745 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.824753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.824776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.824789 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.927715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.927790 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.927806 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.927828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4811]: I1203 00:06:55.927843 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.031278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.031370 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.031397 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.031606 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.031627 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.114376 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.114439 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:56 crc kubenswrapper[4811]: E1203 00:06:56.114597 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:56 crc kubenswrapper[4811]: E1203 00:06:56.114709 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.114750 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:56 crc kubenswrapper[4811]: E1203 00:06:56.115235 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.116716 4811 scope.go:117] "RemoveContainer" containerID="87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359" Dec 03 00:06:56 crc kubenswrapper[4811]: E1203 00:06:56.117052 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.135098 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.135180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.135197 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.135246 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.135292 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.239053 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.239108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.239121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.239144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.239158 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.342370 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.342477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.342489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.342508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.342519 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.446240 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.446309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.446322 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.446350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.446363 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.548663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.548715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.548729 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.548752 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.548765 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.651411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.651487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.651501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.651523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.651537 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.754304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.754376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.754394 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.754425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.754444 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.857833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.857898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.857934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.857973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.857998 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.961724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.961793 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.961811 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.961838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4811]: I1203 00:06:56.961856 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.065140 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.065191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.065213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.065246 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.065312 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.114818 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:57 crc kubenswrapper[4811]: E1203 00:06:57.115022 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.167540 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.167781 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.167831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.167860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.167873 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.272805 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.272849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.272862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.272882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.272893 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.376328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.376377 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.376389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.376411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.376424 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.480281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.480345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.480364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.480389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.480403 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.583635 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.583705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.583725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.583754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.583773 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.687662 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.687742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.687761 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.687792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.687811 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.790851 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.790909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.790925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.790947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.790964 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.893785 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.893821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.893833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.893849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.893859 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.996965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.997006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.997015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.997031 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4811]: I1203 00:06:57.997040 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.100401 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.100473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.100493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.100526 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.100545 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.114984 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.115085 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.115106 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:58 crc kubenswrapper[4811]: E1203 00:06:58.115226 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:58 crc kubenswrapper[4811]: E1203 00:06:58.115384 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:58 crc kubenswrapper[4811]: E1203 00:06:58.115599 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.204798 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.204862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.204885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.204916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.204938 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.309104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.309652 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.309786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.309971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.310148 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.414249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.414354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.414385 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.414420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.414440 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.517143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.517196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.517213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.517239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.517256 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.621478 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.621558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.621581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.621611 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.621632 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.723921 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.723972 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.723984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.724002 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.724013 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.826594 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.827007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.827071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.827296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.827365 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.930591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.930666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.930679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.930698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4811]: I1203 00:06:58.930710 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.034201 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.034240 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.034249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.034289 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.034307 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.114801 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:06:59 crc kubenswrapper[4811]: E1203 00:06:59.115076 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.137781 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.137815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.137826 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.137870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.137885 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.241326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.241366 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.241377 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.241396 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.241407 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.344345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.344711 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.344849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.344999 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.345142 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.448084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.448142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.448157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.448175 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.448190 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.550545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.550625 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.550648 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.550674 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.550696 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.655621 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.655680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.655697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.655722 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.655740 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.759014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.759455 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.759675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.760071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.760327 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.863723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.864072 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.864245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.864519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.864677 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.968386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.968721 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.968860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.969027 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4811]: I1203 00:06:59.969173 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.072449 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.073127 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.073337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.073498 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.073723 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.114339 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.114489 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:00 crc kubenswrapper[4811]: E1203 00:07:00.114607 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.114656 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:00 crc kubenswrapper[4811]: E1203 00:07:00.114914 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:00 crc kubenswrapper[4811]: E1203 00:07:00.115063 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.157930 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6
781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.172911 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.179411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.179472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.179486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.179507 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.179525 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.187529 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.203788 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.229595 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.260163 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 00:06:42.023793 6430 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1203 00:06:42.023724 6430 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:06:42.023831 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.275149 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.283231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.283458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.283586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.283754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.283888 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.290103 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.310027 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.328895 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.345737 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.369943 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9f680f8-059a-4334-afc8-226f41dbf18c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2631b0da901ad6d3813ac0e4eefb7ddb376e9bca75fb6737cc154e9336bea38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662be78be83c4fc0261e0810b70e37365749e1ef960db2bf94ec025e90ca96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://265e4edcc98daf63d66695692e65ca0749f6383ff716dc04b1e4f283d437f640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.386330 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 
2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.386647 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.386694 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.386713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.386739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.386757 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.400089 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.416986 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.432332 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.450159 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.466733 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.490106 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.490147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.490156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.490174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.490184 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.593874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.593947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.593961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.593984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.593997 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.697503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.697573 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.697588 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.697612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.697628 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.800436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.800492 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.800501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.800521 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.800532 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.904406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.904473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.904490 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.904514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4811]: I1203 00:07:00.904530 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.008388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.008432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.008443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.008459 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.008468 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.111368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.111417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.111426 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.111442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.111451 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.114629 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:01 crc kubenswrapper[4811]: E1203 00:07:01.114875 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.215176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.215313 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.215340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.215370 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.215394 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.319352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.319524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.319546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.319624 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.319645 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.423352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.423436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.423480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.423515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.423539 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.526226 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.526369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.526390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.526419 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.526440 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.629817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.629874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.629886 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.629904 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.629915 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.733421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.733556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.733584 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.733627 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.733651 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.835834 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.835880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.835893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.835916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.835930 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.939320 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.939377 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.939404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.939425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4811]: I1203 00:07:01.939438 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.042869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.042944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.042955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.042975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.042987 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.114369 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.114510 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:02 crc kubenswrapper[4811]: E1203 00:07:02.114618 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:02 crc kubenswrapper[4811]: E1203 00:07:02.114702 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.114770 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:02 crc kubenswrapper[4811]: E1203 00:07:02.114978 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.145199 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.145290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.145310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.145337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.145354 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.247755 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.247799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.247810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.247827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.247840 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.350717 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.350879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.350934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.350953 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.350965 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.446955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.447026 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.447052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.447080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.447102 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: E1203 00:07:02.469235 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.475534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.475617 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.475698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.475736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.476210 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: E1203 00:07:02.499889 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.503938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.503973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.503986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.504005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.504016 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: E1203 00:07:02.520099 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.526008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.526068 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.526080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.526102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.526113 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: E1203 00:07:02.540155 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.544900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.544972 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.544984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.545007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.545021 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: E1203 00:07:02.557560 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4811]: E1203 00:07:02.557701 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.559831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.559868 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.559880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.559899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.559910 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.662155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.662203 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.662211 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.662237 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.662248 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.766577 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.766645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.766657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.766680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.766695 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.870142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.870207 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.870225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.870373 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.870411 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.973778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.973828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.973931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.973957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4811]: I1203 00:07:02.973968 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.077524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.077569 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.077582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.077601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.077614 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.114933 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:03 crc kubenswrapper[4811]: E1203 00:07:03.115196 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.128859 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.181617 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.181680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.181698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.181722 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.181737 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.288408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.288471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.288490 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.288515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.288534 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.392314 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.392383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.392402 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.392425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.392441 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.495490 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.495579 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.495602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.495640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.495665 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.597460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.597499 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.597509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.597523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.597534 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.701530 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.701587 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.701602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.701629 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.701646 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.804709 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.804754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.804763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.804780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.804789 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.908046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.908108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.908122 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.908146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4811]: I1203 00:07:03.908165 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.011181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.011241 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.011256 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.011298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.011311 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.113845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.113885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.113898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.113918 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.113933 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.114008 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.114014 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:04 crc kubenswrapper[4811]: E1203 00:07:04.114139 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.114200 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:04 crc kubenswrapper[4811]: E1203 00:07:04.114250 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:04 crc kubenswrapper[4811]: E1203 00:07:04.114330 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.216574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.216648 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.216661 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.216686 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.216702 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.319126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.319161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.319169 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.319186 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.319197 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.422700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.422782 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.422989 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.423069 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.423093 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.526246 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.526327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.526350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.526373 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.526386 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.629281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.629317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.629328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.629345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.629356 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.732752 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.733472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.733594 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.733699 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.734008 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.837897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.838430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.838654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.839015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.839386 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.942672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.942973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.943124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.943232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4811]: I1203 00:07:04.943351 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.046075 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.046379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.046464 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.046542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.046622 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.114719 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:05 crc kubenswrapper[4811]: E1203 00:07:05.114917 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.131115 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:05 crc kubenswrapper[4811]: E1203 00:07:05.131373 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:05 crc kubenswrapper[4811]: E1203 00:07:05.131457 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs podName:ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c nodeName:}" failed. No retries permitted until 2025-12-03 00:07:37.131429249 +0000 UTC m=+97.273258721 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs") pod "network-metrics-daemon-5w9pv" (UID: "ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.149091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.149302 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.149381 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.149482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.149565 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.253600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.253661 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.253682 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.253714 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.253736 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.356065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.356684 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.357059 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.357182 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.359851 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.463118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.463207 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.463232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.463293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.463315 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.566001 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.566346 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.566440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.566528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.566598 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.577624 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c998b_06cb0758-b33b-4730-a341-cc78a072aa5f/kube-multus/0.log" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.577765 4811 generic.go:334] "Generic (PLEG): container finished" podID="06cb0758-b33b-4730-a341-cc78a072aa5f" containerID="d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c" exitCode=1 Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.577828 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c998b" event={"ID":"06cb0758-b33b-4730-a341-cc78a072aa5f","Type":"ContainerDied","Data":"d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c"} Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.578584 4811 scope.go:117] "RemoveContainer" containerID="d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.609207 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26
a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.620134 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.634982 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.654752 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 00:06:42.023793 6430 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1203 00:06:42.023724 6430 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:06:42.023831 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.665871 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.669948 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.670014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.670036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.670063 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.670083 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.678838 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.696887 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.711344 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.726197 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.750933 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:04Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d\\\\n2025-12-03T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d to /host/opt/cni/bin/\\\\n2025-12-03T00:06:19Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:19Z [verbose] Readiness Indicator file check\\\\n2025-12-03T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.773536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.773585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.773599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.773620 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.773633 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.801682 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9f680f8-059a-4334-afc8-226f41dbf18c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2631b0da901ad6d3813ac0e4eefb7ddb376e9bca75fb6737cc154e9336bea38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662be78be83c4fc0261e0810b70e37365749e1ef960db2bf94ec025e90ca96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://265e4edcc98daf63d66695692e65ca0749f6383ff716dc04b1e4f283d437f640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.831110 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.842989 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.855113 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.867389 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbda0eda-b987-4be5-bf1c-a4541a9270bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2c6f29b65991f85990666d9b5ac4d86ba58d8248ab611dc49bd6ea44808a4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.876172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.876224 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.876234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.876249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.876284 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.887923 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.902214 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.915040 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.925903 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:05Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.979634 4811 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.979706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.979720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.979743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4811]: I1203 00:07:05.979801 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.082444 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.082499 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.082508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.082531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.082542 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.117199 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:06 crc kubenswrapper[4811]: E1203 00:07:06.117373 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.117579 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:06 crc kubenswrapper[4811]: E1203 00:07:06.117626 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.117740 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:06 crc kubenswrapper[4811]: E1203 00:07:06.117785 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.185932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.185980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.185992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.186011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.186023 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.289062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.289112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.289123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.289146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.289160 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.391615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.391691 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.391706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.391725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.391736 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.494909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.494942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.494957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.494974 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.494984 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.587702 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c998b_06cb0758-b33b-4730-a341-cc78a072aa5f/kube-multus/0.log" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.587882 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c998b" event={"ID":"06cb0758-b33b-4730-a341-cc78a072aa5f","Type":"ContainerStarted","Data":"738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f"} Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.597804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.597857 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.597872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.597893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.597908 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.605215 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9f680f8-059a-4334-afc8-226f41dbf18c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2631b0da901ad6d3813ac0e4eefb7ddb376e9bca75fb6737cc154e9336bea38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662be78be83c4fc0261e0810b70e37365749e1ef960db2bf94ec025e90ca96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://265e4edcc98daf63d66695692e65ca0749f6383ff716dc04b1e4f283d437f640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.622177 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 
2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.634301 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.646471 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc 
kubenswrapper[4811]: I1203 00:07:06.658889 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbda0eda-b987-4be5-bf1c-a4541a9270bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2c6f29b65991f85990666d9b5ac4d86ba58d8248ab611dc49bd6ea44808a4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.676352 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.689505 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.700528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.700582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.700593 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.700610 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.700654 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.703301 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.720148 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.745443 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.760541 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.777698 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.799066 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 00:06:42.023793 6430 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1203 00:06:42.023724 6430 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:06:42.023831 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.802864 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.802886 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.802894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.802911 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.802921 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.812827 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.828878 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.843285 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.858880 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.873599 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.890603 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:04Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d\\\\n2025-12-03T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d to /host/opt/cni/bin/\\\\n2025-12-03T00:06:19Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:19Z [verbose] Readiness Indicator file check\\\\n2025-12-03T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.906170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.906220 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.906231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.906251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4811]: I1203 00:07:06.906278 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.009627 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.009675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.009687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.009707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.009776 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.112523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.112568 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.112578 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.112596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.112608 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.114783 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:07 crc kubenswrapper[4811]: E1203 00:07:07.114908 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.215819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.215873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.215888 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.215911 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.215925 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.319229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.319297 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.319310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.319329 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.319344 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.422168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.422215 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.422225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.422245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.422273 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.525315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.525367 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.525377 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.525398 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.525410 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.630727 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.630807 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.630831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.631487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.631565 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.734823 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.734944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.734970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.735000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.735022 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.837764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.837832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.837854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.837880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.837900 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.941205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.941312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.941340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.941371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4811]: I1203 00:07:07.941395 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.045076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.045149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.045171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.045200 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.045230 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.114072 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:08 crc kubenswrapper[4811]: E1203 00:07:08.114482 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.114645 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:08 crc kubenswrapper[4811]: E1203 00:07:08.114800 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.115020 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:08 crc kubenswrapper[4811]: E1203 00:07:08.115511 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.147919 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.147973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.147990 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.148014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.148032 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.250891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.250942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.250958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.250982 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.250999 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.353939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.353987 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.353998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.354022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.354036 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.456579 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.456648 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.456667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.456698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.456718 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.559001 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.559303 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.559438 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.559511 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.559586 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.662424 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.662482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.662500 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.662527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.662543 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.765647 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.765730 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.765754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.765783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.765795 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.869282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.869348 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.869364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.869389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.869413 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.972167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.972487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.972623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.972725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4811]: I1203 00:07:08.972825 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.075541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.075602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.075625 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.075649 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.075665 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.114249 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:09 crc kubenswrapper[4811]: E1203 00:07:09.114760 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.178641 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.178706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.178725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.178750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.178771 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.281508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.281921 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.282006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.282117 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.282222 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.385714 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.385772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.385786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.385810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.385827 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.488909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.488948 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.488963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.488979 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.488989 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.591499 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.591536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.591544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.591558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.591568 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.693848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.693897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.693912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.693930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.693940 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.797202 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.797315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.797371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.797397 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.797411 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.900249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.900312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.900324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.900344 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4811]: I1203 00:07:09.900357 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.002948 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.002994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.003004 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.003025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.003036 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.105523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.105559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.105569 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.105587 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.105599 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.114924 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.114956 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.114935 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:10 crc kubenswrapper[4811]: E1203 00:07:10.115087 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:10 crc kubenswrapper[4811]: E1203 00:07:10.115197 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:10 crc kubenswrapper[4811]: E1203 00:07:10.115339 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.129321 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.144979 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.159370 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.176921 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:04Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d\\\\n2025-12-03T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d to /host/opt/cni/bin/\\\\n2025-12-03T00:06:19Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:19Z [verbose] Readiness Indicator file check\\\\n2025-12-03T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.195504 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.207270 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.207359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc 
kubenswrapper[4811]: I1203 00:07:10.207370 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.207387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.207402 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.216438 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024f
c3aba87dcbc15e3b687d7359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 00:06:42.023793 6430 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1203 00:06:42.023724 6430 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:06:42.023831 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.227138 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.241069 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.257030 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.278030 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 
00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.289958 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.302891 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9f680f8-059a-4334-afc8-226f41dbf18c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2631b0da901ad6d3813ac0e4eefb7ddb376e9bca75fb6737cc154e9336bea38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662be78be83c4fc0261e0810b70e37365749e1ef960db2bf94ec025e90ca96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://265e4edcc98daf63d66695692e65ca0749f6383ff716dc04b1e4f283d437f640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.310999 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.311043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.311057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.311081 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.311099 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.319093 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.332842 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.346177 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.357820 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.367408 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbda0eda-b987-4be5-bf1c-a4541a9270bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2c6f29b65991f85990666d9b5ac4d86ba58d8248ab611dc49bd6ea44808a4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.378482 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.400859 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b4371
5f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.413426 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.413460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.413476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.413493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.413503 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.515883 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.515938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.515950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.515970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.515981 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.618273 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.618313 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.618323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.618342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.618353 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.721084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.721135 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.721147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.721165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.721204 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.823710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.823801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.823812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.823830 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.823839 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.926517 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.926558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.926570 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.926591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4811]: I1203 00:07:10.926605 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.029484 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.029533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.029541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.029556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.029565 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.114209 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:11 crc kubenswrapper[4811]: E1203 00:07:11.114453 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.115543 4811 scope.go:117] "RemoveContainer" containerID="87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.133032 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.133083 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.133102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.133123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.133141 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.236081 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.236859 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.236946 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.237024 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.237131 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.339829 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.339861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.339872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.339884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.339893 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.442912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.442957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.442973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.442997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.443015 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.545368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.545441 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.545462 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.545491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.545514 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.609836 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/2.log" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.616720 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96"} Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.618097 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.630438 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbda0eda-b987-4be5-bf1c-a4541a9270bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2c6f29b65991f85990666d9b5ac4d86ba58d8248ab611dc49bd6ea44808a4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.648404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.648458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.648472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.648498 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.648513 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.651483 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.664588 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.678980 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.692049 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.711500 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.724016 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.745481 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 00:06:42.023793 6430 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1203 00:06:42.023724 6430 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:06:42.023831 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.750969 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.751005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.751014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.751033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.751045 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.759045 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.773867 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.787088 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.802015 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.818855 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.835950 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:04Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d\\\\n2025-12-03T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d to /host/opt/cni/bin/\\\\n2025-12-03T00:06:19Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:19Z [verbose] Readiness Indicator file check\\\\n2025-12-03T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.854418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.854496 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.854506 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.854526 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.854541 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.854622 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.869040 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9f680f8-059a-4334-afc8-226f41dbf18c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2631b0da901ad6d3813ac0e4eefb7ddb376e9bca75fb6737cc154e9336bea38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662be78be83c4fc0261e0810b70e37365749e1ef960db2bf94ec025e90ca96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://265e4edcc98daf63d66695692e65ca0749f6383ff716dc04b1e4f283d437f640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\
\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.887953 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.905459 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.929312 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.957501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.957557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.957568 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.957591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4811]: I1203 00:07:11.957605 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.103925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.103986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.103997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.104019 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.104031 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.114779 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.114837 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.114776 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:12 crc kubenswrapper[4811]: E1203 00:07:12.114904 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:12 crc kubenswrapper[4811]: E1203 00:07:12.115065 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:12 crc kubenswrapper[4811]: E1203 00:07:12.115143 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.206681 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.206723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.206734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.206748 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.206757 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.309534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.309578 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.309587 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.309602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.309611 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.412136 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.412192 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.412206 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.412224 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.412296 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.514879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.514931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.514941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.514964 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.514975 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.617502 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.617546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.617556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.617574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.617585 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.620604 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/3.log" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.621473 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/2.log" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.624068 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" exitCode=1 Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.624110 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.624145 4811 scope.go:117] "RemoveContainer" containerID="87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.629436 4811 scope.go:117] "RemoveContainer" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" Dec 03 00:07:12 crc kubenswrapper[4811]: E1203 00:07:12.630035 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.649542 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b4371
5f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.661649 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.669712 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.669735 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.669742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.669756 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.669765 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.680603 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: E1203 00:07:12.683144 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.686686 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.686725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.686734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.686747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.686756 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.693793 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: E1203 00:07:12.698989 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.704209 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.704309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.704336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.704365 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.704385 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.710346 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:04Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d\\\\n2025-12-03T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d to /host/opt/cni/bin/\\\\n2025-12-03T00:06:19Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:19Z [verbose] Readiness Indicator file check\\\\n2025-12-03T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: E1203 00:07:12.719323 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.723116 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.723138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.723147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.723162 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.723171 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.724566 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: E1203 00:07:12.742092 4811 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329b
a568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.745538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.745576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.745647 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.745668 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.745690 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.748116 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87320f5a8aa4547077a186ff8dc6bbfbd02b024fc3aba87dcbc15e3b687d7359\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"message\\\":\\\"w object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1203 00:06:42.023793 6430 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1203 00:06:42.023724 6430 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dcc9e-c16a-4962-a6d2-4adeb0b929c4}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:06:42.023831 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"2.414905 6786 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1203 00:07:12.414904 6786 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI1203 00:07:12.414922 6786 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1203 00:07:12.414929 6786 services_controller.go:360] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.424932ms\\\\nI1203 00:07:12.414934 6786 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1203 00:07:12.414930 6786 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-5w9pv before timer (time: 2025-12-03 00:07:13.61622107 +0000 UTC m=+1.917385812): skip\\\\nI1203 00:07:12.414853 6786 services_controller.go:445] Built service openshift-cluster-version/cluster-version-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1203 00:07:12.414946 6786 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1203 00:07:12.414947 6786 
obj_retry.go\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.761854 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: E1203 00:07:12.762017 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: E1203 00:07:12.762242 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.765014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.765089 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.765105 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.765125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.765139 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.775248 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.788099 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.800275 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 
00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.813376 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.825278 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9f680f8-059a-4334-afc8-226f41dbf18c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2631b0da901ad6d3813ac0e4eefb7ddb376e9bca75fb6737cc154e9336bea38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662be78be83c4fc0261e0810b70e37365749e1ef960db2bf94ec025e90ca96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://265e4edcc98daf63d66695692e65ca0749f6383ff716dc04b1e4f283d437f640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.839443 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 
2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.850729 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.864746 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.868212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.868285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.868298 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.868319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.868331 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.878012 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.892155 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbda0eda-b987-4be5-bf1c-a4541a9270bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2c6f29b65991f85990666d9b5ac4d86ba58d8248ab611dc49bd6ea44808a4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\
\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.905516 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\
"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:12Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.971703 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.971756 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.971767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.971784 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4811]: I1203 00:07:12.971799 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.075827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.075876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.075885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.075908 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.075919 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.114545 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:13 crc kubenswrapper[4811]: E1203 00:07:13.114768 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.179047 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.179096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.179106 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.179126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.179137 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.287885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.287961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.287971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.287997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.288014 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.391046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.391105 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.391121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.391144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.391161 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.494341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.494431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.494458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.494491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.494518 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.598035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.598103 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.598121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.598147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.598164 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.629861 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/3.log" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.633847 4811 scope.go:117] "RemoveContainer" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" Dec 03 00:07:13 crc kubenswrapper[4811]: E1203 00:07:13.634015 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.647977 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbda0eda-b987-4be5-bf1c-a4541a9270bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2c6f29b65991f85990666d9b5ac4d86ba58d8248ab611dc49bd6ea44808a4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.661090 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.672719 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.693367 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.701524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.701567 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.701581 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.701601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.701617 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.706903 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.725538 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06
:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.734456 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.745431 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.756219 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.768724 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.779966 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.790772 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:04Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d\\\\n2025-12-03T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d to /host/opt/cni/bin/\\\\n2025-12-03T00:06:19Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:19Z [verbose] Readiness Indicator file check\\\\n2025-12-03T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.803671 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.803993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.804011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc 
kubenswrapper[4811]: I1203 00:07:13.804022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.804040 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.804051 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.822805 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bc928e6b6f2de04d3637d75927d82cec694dee
cbe9fc9ac952c8a0ef82fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"2.414905 6786 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1203 00:07:12.414904 6786 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI1203 00:07:12.414922 6786 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1203 00:07:12.414929 6786 services_controller.go:360] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.424932ms\\\\nI1203 00:07:12.414934 6786 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1203 00:07:12.414930 6786 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-5w9pv before timer (time: 2025-12-03 00:07:13.61622107 +0000 UTC m=+1.917385812): skip\\\\nI1203 00:07:12.414853 6786 services_controller.go:445] Built service openshift-cluster-version/cluster-version-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1203 00:07:12.414946 6786 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1203 00:07:12.414947 6786 obj_retry.go\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.831074 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.840204 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9f680f8-059a-4334-afc8-226f41dbf18c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2631b0da901ad6d3813ac0e4eefb7ddb376e9bca75fb6737cc154e9336bea38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662be78be83c4fc0261e0810b70e37365749e1ef960db2bf94ec025e90ca96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://265e4edcc98daf63d66695692e65ca0749f6383ff716dc04b1e4f283d437f640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440
c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.851438 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.859768 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 
00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.869128 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:13Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.905905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.905940 4811 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.905952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.905970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4811]: I1203 00:07:13.905982 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.008557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.008604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.008615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.008632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.008643 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.111153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.111187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.111195 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.111210 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.111220 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.114760 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.114778 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:14 crc kubenswrapper[4811]: E1203 00:07:14.114846 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.114862 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:14 crc kubenswrapper[4811]: E1203 00:07:14.114953 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:14 crc kubenswrapper[4811]: E1203 00:07:14.115026 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.214159 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.214203 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.214217 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.214236 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.214250 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.316367 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.316418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.316432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.316451 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.316467 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.418695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.418755 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.418774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.418798 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.418817 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.522529 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.522588 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.522600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.522620 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.522633 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.626072 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.626227 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.626307 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.626345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.626367 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.729538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.729584 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.729593 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.729607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.729616 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.832713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.832759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.832768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.832786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.832796 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.934916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.934963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.934975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.934990 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4811]: I1203 00:07:14.935001 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.037292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.037337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.037454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.037471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.037481 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.114277 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:15 crc kubenswrapper[4811]: E1203 00:07:15.114479 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.140498 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.140537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.140548 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.140566 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.140577 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.243314 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.243362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.243374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.243390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.243399 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.347108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.347170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.347187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.347212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.347228 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.461347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.461419 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.461440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.461464 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.461478 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.565359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.565439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.565459 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.565493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.565514 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.669111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.669179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.669198 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.669224 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.669244 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.771942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.772013 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.772031 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.772058 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.772076 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.876024 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.876498 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.876642 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.876776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.876886 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.980475 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.980965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.981180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.981371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4811]: I1203 00:07:15.981536 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.085339 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.085816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.086029 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.086338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.086570 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.114668 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.114669 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:16 crc kubenswrapper[4811]: E1203 00:07:16.114818 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.114901 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:16 crc kubenswrapper[4811]: E1203 00:07:16.115100 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:16 crc kubenswrapper[4811]: E1203 00:07:16.115186 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.189046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.189118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.189128 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.189147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.189162 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.292882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.292967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.293004 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.293032 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.293054 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.396455 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.396524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.396548 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.396582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.396607 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.499547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.499594 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.499608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.499627 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.499639 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.602196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.602298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.602318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.602342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.602359 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.705718 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.705777 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.705791 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.705809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.705823 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.808894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.808934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.808944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.808960 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.808972 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.911625 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.911696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.911720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.911741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4811]: I1203 00:07:16.911753 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.014849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.014921 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.014948 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.014981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.015009 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.114622 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:17 crc kubenswrapper[4811]: E1203 00:07:17.114804 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.117708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.117736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.117745 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.117757 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.117766 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.221932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.221984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.221996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.222021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.222035 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.324989 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.325035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.325048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.325067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.325078 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.428432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.428513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.428547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.428569 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.428579 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.532076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.532119 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.532131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.532174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.532190 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.635485 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.635547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.635568 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.635591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.635609 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.738952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.739012 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.739023 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.739041 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.739053 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.841960 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.841997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.842009 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.842024 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.842037 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.945427 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.945492 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.945500 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.945518 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4811]: I1203 00:07:17.945530 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.055536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.056011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.056251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.056554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.056820 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.115563 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.115687 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.115809 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:18 crc kubenswrapper[4811]: E1203 00:07:18.115931 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:18 crc kubenswrapper[4811]: E1203 00:07:18.116443 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:18 crc kubenswrapper[4811]: E1203 00:07:18.116572 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.160238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.160590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.160806 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.160971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.161163 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.264383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.264448 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.264462 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.264486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.264502 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.368242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.368358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.368378 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.368406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.368425 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.472123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.472176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.472201 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.472229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.472248 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.575737 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.575881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.575902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.575927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.575944 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.678765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.678841 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.678860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.678889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.678906 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.781873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.781920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.781933 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.781950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.781962 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.885528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.885604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.885622 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.885651 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.885670 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.989026 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.989109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.989132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.989158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4811]: I1203 00:07:18.989178 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.091999 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.092048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.092057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.092074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.092085 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.114724 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:19 crc kubenswrapper[4811]: E1203 00:07:19.114936 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.194827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.194870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.194881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.194898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.194910 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.297822 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.298289 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.298416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.298562 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.298701 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.401426 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.401506 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.401520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.401544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.401559 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.504390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.504458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.504473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.504492 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.504504 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.608159 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.608219 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.608230 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.608249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.608291 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.711907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.711980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.712004 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.712032 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.712049 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.814235 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.814296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.814311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.814330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.814344 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.917572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.917613 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.917625 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.917643 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4811]: I1203 00:07:19.917655 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.020199 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.020286 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.020298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.020324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.020338 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.114881 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.114945 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.114882 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:20 crc kubenswrapper[4811]: E1203 00:07:20.115031 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:20 crc kubenswrapper[4811]: E1203 00:07:20.115183 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:20 crc kubenswrapper[4811]: E1203 00:07:20.115332 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.125524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.125585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.125604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.125629 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.125647 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.140022 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.157203 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.170312 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.182334 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbda0eda-b987-4be5-bf1c-a4541a9270bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2c6f29b65991f85990666d9b5ac4d86ba58d8248ab611dc49bd6ea44808a4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.194217 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] 
pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.220217 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b4371
5f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.228838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.228884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.228899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.228918 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.228932 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.234421 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.251809 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.270404 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.286818 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:04Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d\\\\n2025-12-03T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d to /host/opt/cni/bin/\\\\n2025-12-03T00:06:19Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:19Z [verbose] Readiness Indicator file check\\\\n2025-12-03T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.308529 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.331695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.331763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc 
kubenswrapper[4811]: I1203 00:07:20.331782 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.331809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.331830 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.339895 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bc928e6b6f2de04d3637d75927d82cec694dee
cbe9fc9ac952c8a0ef82fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"2.414905 6786 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1203 00:07:12.414904 6786 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI1203 00:07:12.414922 6786 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1203 00:07:12.414929 6786 services_controller.go:360] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.424932ms\\\\nI1203 00:07:12.414934 6786 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1203 00:07:12.414930 6786 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-5w9pv before timer (time: 2025-12-03 00:07:13.61622107 +0000 UTC m=+1.917385812): skip\\\\nI1203 00:07:12.414853 6786 services_controller.go:445] Built service openshift-cluster-version/cluster-version-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1203 00:07:12.414946 6786 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1203 00:07:12.414947 6786 obj_retry.go\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.352576 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.367302 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.383514 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.395599 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.410671 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.422795 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9f680f8-059a-4334-afc8-226f41dbf18c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2631b0da901ad6d3813ac0e4eefb7ddb376e9bca75fb6737cc154e9336bea38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662be78be83c4fc0261e0810b70e37365749e1ef960db2bf94ec025e90ca96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://265e4edcc98daf63d66695692e65ca0749f6383ff716dc04b1e4f283d437f640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.434605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.434638 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.434647 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.434663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.434676 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.439518 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:20Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.536633 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.536666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.536676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.536691 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.536702 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.639316 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.639373 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.639391 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.639414 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.639432 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.741534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.741590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.741606 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.741630 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.741647 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.845563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.845625 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.845641 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.845666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.845683 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.948689 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.948740 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.948756 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.948780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4811]: I1203 00:07:20.949088 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.052835 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.052889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.052907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.052931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.052950 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.114411 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:21 crc kubenswrapper[4811]: E1203 00:07:21.114629 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.157168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.157585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.157688 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.157840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.157937 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.261562 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.261628 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.261647 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.261671 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.261689 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.365158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.365234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.365291 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.365328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.365353 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.468673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.468739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.468765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.468793 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.468816 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.572374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.572422 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.572434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.572453 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.572466 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.675115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.675554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.675568 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.675589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.675605 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.778013 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.778058 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.778125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.778148 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.778161 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.881687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.881768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.881786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.881822 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.881843 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.985229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.985340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.985360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.985388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4811]: I1203 00:07:21.985408 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.037248 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.037592 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.037650 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.037729 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:26.037667382 +0000 UTC m=+146.179496924 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.037806 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.037849 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.037880 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.037898 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.037931 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:26.037907348 +0000 UTC m=+146.179736830 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.037937 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.037975 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:26.03795313 +0000 UTC m=+146.179782622 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.038041 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.038232 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.038287 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.038302 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.038333 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.038361 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:26.0383478 +0000 UTC m=+146.180177442 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.038529 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:26.038493464 +0000 UTC m=+146.180322936 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.088425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.088511 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.088532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.088560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.088579 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.114227 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.114226 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.114462 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.114378 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.114729 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.114847 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.191460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.191522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.191569 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.191593 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.191629 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.294919 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.294981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.294993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.295015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.295027 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.398244 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.398593 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.398666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.398751 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.398810 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.501640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.501704 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.501720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.501741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.501758 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.605554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.605616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.605628 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.605649 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.605665 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.708031 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.708092 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.708105 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.708120 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.708130 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.810994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.811039 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.811052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.811072 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.811087 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.866440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.866512 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.866525 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.866547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.866563 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.880910 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.885526 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.885563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.885574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.885596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.885608 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.897999 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.902433 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.902475 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.902485 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.902509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.902529 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.917664 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.922862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.922927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.922954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.922987 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.923009 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.938758 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.950492 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.950574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.950598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.950631 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.950649 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.967344 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:22Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:22 crc kubenswrapper[4811]: E1203 00:07:22.967485 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.969243 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
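The node status patch above is rejected because the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a certificate whose validity ended 2025-08-24T17:21:41Z, well before the node clock of 2025-12-03. A minimal Go sketch for inspecting that certificate window is below; it assumes the webhook is still listening on 127.0.0.1:9743 as in the log and is illustrative only, not part of the captured output.

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the log entry above. InsecureSkipVerify
	// lets the handshake complete even though the presented certificate has
	// already expired, so its validity window can still be read.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("TLS dial failed: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:    %s\n", cert.Subject)
	fmt.Printf("not before: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("not after:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:    %v\n", time.Now().After(cert.NotAfter))
}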
event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.969288 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.969300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.969321 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4811]: I1203 00:07:22.969335 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.073323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.073379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.073392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.073414 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.073429 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.114858 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:23 crc kubenswrapper[4811]: E1203 00:07:23.115022 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.176427 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.176473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.176483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.176499 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.176509 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.280138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.280204 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.280216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.280240 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.280623 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.384098 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.384144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.384153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.384168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.384177 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.487030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.487093 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.487105 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.487127 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.487142 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.590390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.590460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.590473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.590513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.590528 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.692937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.693011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.693029 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.693054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.693070 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.796852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.796905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.796921 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.796943 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.796957 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.900490 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.900561 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.900600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.900646 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4811]: I1203 00:07:23.900673 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.004495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.004571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.004590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.004615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.004631 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.108558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.108636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.108654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.108682 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.108700 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.115023 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.115084 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.115121 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:24 crc kubenswrapper[4811]: E1203 00:07:24.115352 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:24 crc kubenswrapper[4811]: E1203 00:07:24.115707 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:24 crc kubenswrapper[4811]: E1203 00:07:24.116373 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.117667 4811 scope.go:117] "RemoveContainer" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" Dec 03 00:07:24 crc kubenswrapper[4811]: E1203 00:07:24.117956 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.212198 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.212333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.212368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.212402 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.212424 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.315596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.315657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.315675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.315700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.315720 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.419431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.419505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.419524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.419551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.419568 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.522780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.522845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.522862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.522888 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.522907 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.625856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.625923 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.625940 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.625968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.625990 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.729233 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.729340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.729367 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.729395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.729415 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.833057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.833164 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.833183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.833212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.833229 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.936508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.936597 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.936621 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.936659 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4811]: I1203 00:07:24.936681 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.040965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.041025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.041043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.041067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.041085 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.114892 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:25 crc kubenswrapper[4811]: E1203 00:07:25.115106 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.144985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.145044 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.145054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.145077 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.145089 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.249024 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.249129 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.249150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.249176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.249195 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.353397 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.353484 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.353504 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.353529 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.353546 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.456421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.456462 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.456473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.456507 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.456521 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.559053 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.559116 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.559131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.559156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.559168 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.662979 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.663043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.663055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.663076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.663090 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.766939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.767015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.767036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.767065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.767085 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.870102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.870180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.870203 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.870235 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.870292 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.973599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.973695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.973774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.973799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4811]: I1203 00:07:25.973810 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.076893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.076965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.076981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.077006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.077025 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.114722 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.114811 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.114858 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:26 crc kubenswrapper[4811]: E1203 00:07:26.115065 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:26 crc kubenswrapper[4811]: E1203 00:07:26.115510 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:26 crc kubenswrapper[4811]: E1203 00:07:26.115614 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.180494 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.180539 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.180551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.180568 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.180582 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.283566 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.283666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.283692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.283729 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.283754 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.386899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.386946 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.386956 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.386975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.386986 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.490385 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.490432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.490444 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.490461 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.490473 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.593516 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.593613 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.593641 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.593673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.593697 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.697684 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.697770 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.697781 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.697800 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.697814 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.800850 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.800902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.800919 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.800945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.800962 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.904292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.904345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.904353 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.904371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4811]: I1203 00:07:26.904381 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.007752 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.007829 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.007856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.007888 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.007913 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.110918 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.111058 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.111081 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.111109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.111128 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.114350 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:27 crc kubenswrapper[4811]: E1203 00:07:27.114475 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.220140 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.220185 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.220238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.220298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.220318 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.322573 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.322631 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.322645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.322661 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.322672 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.426235 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.426360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.426385 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.426417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.426440 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.529879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.529954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.529978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.530008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.530029 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.633899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.633959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.633977 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.634001 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.634018 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.737181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.737257 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.737336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.737368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.737388 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.840639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.840698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.840715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.840739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.840759 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.944538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.944686 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.944758 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.944794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4811]: I1203 00:07:27.944816 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.047913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.047977 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.048000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.048023 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.048041 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.114737 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.114793 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.114735 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:28 crc kubenswrapper[4811]: E1203 00:07:28.114966 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:28 crc kubenswrapper[4811]: E1203 00:07:28.115083 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:28 crc kubenswrapper[4811]: E1203 00:07:28.116009 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.150772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.150836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.150860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.150891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.150916 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.253945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.253994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.254014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.254048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.254083 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.357348 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.357438 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.357455 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.357480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.357498 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.460636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.460697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.460713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.460736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.460754 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.563859 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.563905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.563917 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.563937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.563948 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.667578 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.667634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.667642 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.667657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.667667 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.770586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.770657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.770677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.770700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.770718 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.874025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.874095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.874108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.874132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.874147 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.977369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.977445 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.977462 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.977497 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4811]: I1203 00:07:28.977516 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.081357 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.081455 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.081472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.081500 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.081521 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.114365 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:29 crc kubenswrapper[4811]: E1203 00:07:29.114706 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.184885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.184945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.184955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.184980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.184994 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.288511 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.288566 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.288579 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.288605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.288626 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.392043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.392096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.392114 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.392140 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.392153 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.495428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.495476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.495491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.495513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.495526 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.598902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.598955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.598971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.598996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.599014 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.701534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.701602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.701619 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.701645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.701664 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.805204 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.805319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.805350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.805383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.805408 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.908767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.908843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.908870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.908896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4811]: I1203 00:07:29.908913 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.011986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.012042 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.012060 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.012083 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.012100 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.114416 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:30 crc kubenswrapper[4811]: E1203 00:07:30.114640 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.114823 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.114882 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.115038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.115069 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.115081 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.115100 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.115112 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4811]: E1203 00:07:30.115111 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:30 crc kubenswrapper[4811]: E1203 00:07:30.115241 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.132426 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fl6vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cce253a-e326-4d5e-9cf8-3dff3e77fcf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3062e8e63b571f936331f1f20ef4f647f428d1ce472806762c526c531513fa59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f2h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fl6vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.167035 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f237f72d-6ae6-4d17-9df8-92d9ef6532ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f847141b03d36e5971c1ba7cf6382cd8b5f39d75033bc91b9d681bd4e3eaf001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47814c93f66e211935d932de80c6e1f6de67fac1cbd99121c243026afaea6452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://669b6109c633b01427f0a86b86fd3b582aa1fa3ae54259ff0de3aa593b4aac7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-0
3T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a23b1f2e0720429b7ec529b3d4d23de81b43715f8c342bd142583cb13a35a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990e29c507d36884bfb3bc3240d2266c2a0dd32bca9d663630c1600673ba46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5c92452a60f78ca02be7a45ed4258605b56320b26a4e9672eb0ad33f056abe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa692574042fc13b7aa708733d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaf3348ab96775b6781b49fc23842deae093faa6925
74042fc13b7aa708733d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938f149a05e07d9718006331500357bea4198b3c852635960a413c0ad5a46b6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.186807 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.208184 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.217503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.217555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.217571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.217597 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.217614 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.231997 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea488900fde67cf407a18fa89a4da3716715807d54d3c1e2368be82d66ae6ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.254839 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c998b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb0758-b33b-4730-a341-cc78a072aa5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:04Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d\\\\n2025-12-03T00:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_88127b84-bbdb-4977-9d1b-f8663fcaa00d to /host/opt/cni/bin/\\\\n2025-12-03T00:06:19Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:19Z [verbose] Readiness Indicator file check\\\\n2025-12-03T00:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l5dzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c998b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.280567 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-56rjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbb952e-adc7-460c-994c-5620183fe85f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b522a97d861ee9675126a81f792627c854c0a1188d119662a2c73893b381be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74832337ae6de35f0127fdb0e45d37da25ce11bfe5c55659e9c4b1d6aa456099\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd9b5131207de5030def5a93b8051b35b8acc968f287b82dc0f1a19ee545ef90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f4c9d47174704b2eaea71156f7d5bea75ad2ba394dcf01303b67d2c6e9ac2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43bf617415c0db7808f4c46e3183cec42e77f16ada60e669aaa4cd12e93e412\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f6f71518b09b1b0c97a179c227174861289ddad4f32352547ee8f8b1c292968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be81b7b6c5b6ddfc659e8919a8dc18275c7f580c69592bcf24db985d36ffd85e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjmz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-56rjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.312797 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e8d9251-ed38-4134-b62e-f9a34bf4c755\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:12Z\\\",\\\"message\\\":\\\"2.414905 6786 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1203 00:07:12.414904 6786 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI1203 00:07:12.414922 6786 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1203 00:07:12.414929 6786 services_controller.go:360] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager for network=default : 2.424932ms\\\\nI1203 00:07:12.414934 6786 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1203 00:07:12.414930 6786 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-5w9pv before timer (time: 2025-12-03 00:07:13.61622107 +0000 UTC m=+1.917385812): skip\\\\nI1203 00:07:12.414853 6786 services_controller.go:445] Built service openshift-cluster-version/cluster-version-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1203 00:07:12.414946 6786 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1203 00:07:12.414947 6786 obj_retry.go\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms7q7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mjj8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.321382 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.323435 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.323479 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.323510 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.323598 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.328653 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pd6c8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e3bc4f8-f4c1-41bf-aa8d-6cf8aaca0c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b82966e6bec5571d303a6cca9e95d633fbbef9d432e06624599189e1f1a18bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pd6c8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.346633 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e4b0be6-68a5-4c0f-b249-136512a0d3fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e75e1d01708de2172c478bb0e68ddf67c4781120e68c2f3d0cacc459e80d03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b063d781bf2a01540f4d216ce5202d913a707a8eed1c33abd2cbdb781b5541bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93577bf5c086ec89c6b0926cf5495bdb195aac2a69247f8b2463eacafd280da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.364701 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53829eb90de4df256d0c679f171b2099b0bfc22eb0e40fe715b248bd7aee2112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.380963 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d51e76d-e9e8-46ee-b4bf-4e2306d34ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf1c4e27b3372fc712109ba6d088b06a567eeaaf008ce6b8bfcd9c565902d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7a27c9a41df16fdad66a4e0f5de7717a301c00db86dbacf4378610997a081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvhh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:32Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m46wp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.397907 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5w9pv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.417132 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9f680f8-059a-4334-afc8-226f41dbf18c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2631b0da901ad6d3813ac0e4eefb7ddb376e9bca75fb6737cc154e9336bea38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://662be78be83c4fc0261e0810b70e37365749e1ef960db2bf94ec025e90ca96fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://265e4edcc98daf63d66695692e65ca0749f6383ff716dc04b1e4f283d437f640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06379331358b7dc52c66b6847cb28ff66a2fbdc363bc0e5d3e038138981bee92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.426546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.426614 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.426632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.426657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.426676 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.438297 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7278dba7-5e62-413c-b7b9-3d5133ebc173\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1203 00:06:17.909606 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1203 00:06:17.909734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 00:06:17.911054 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1002135492/tls.crt::/tmp/serving-cert-1002135492/tls.key\\\\\\\"\\\\nI1203 00:06:18.205830 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 00:06:18.208236 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 00:06:18.208252 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 00:06:18.208308 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 00:06:18.208315 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 00:06:18.216013 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 00:06:18.216053 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 00:06:18.216062 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 00:06:18.216065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 00:06:18.216068 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 00:06:18.216072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 00:06:18.216636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 00:06:18.219557 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.458373 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.479125 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d494605fb954d87abfde2dc3a48b5d5e25537232b0415d574eebf5d2448f506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4d96ab719fc8ce4f98ee8ae15aa7869f469f44b45b89625846e0bddc412f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.497593 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00463350-e27b-4e14-acee-d79ff4d8eda3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a357f37aeec05aab384b4977db4f6b5e0cc9a65fcffcd180425fed75d9d0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ps4ng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bc7p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.510448 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbda0eda-b987-4be5-bf1c-a4541a9270bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2c6f29b65991f85990666d9b5ac4d86ba58d8248ab611dc49bd6ea44808a4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821cdb2d907cfda391f7d071bb7c977e8cbcbcbef90eb4a46482ee0725bff564\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:30Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.529114 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4811]: 
I1203 00:07:30.529175 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.529191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.529633 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.529689 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.632903 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.632960 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.632976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.632996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.633012 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.736161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.736228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.736249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.736315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.736339 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.840323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.840441 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.840467 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.840501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.840527 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.943630 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.943787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.943803 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.943821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4811]: I1203 00:07:30.943833 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.047595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.047694 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.047711 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.047736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.047760 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.114019 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:31 crc kubenswrapper[4811]: E1203 00:07:31.114214 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.150570 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.150624 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.150641 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.150664 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.150682 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.254156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.254215 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.254239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.254311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.254337 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.357431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.357489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.357505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.357527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.357543 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.459847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.459907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.459945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.459979 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.460005 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.563750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.563872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.563899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.563982 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.564007 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.667163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.667225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.667247 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.667308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.667334 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.770390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.770457 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.770478 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.770509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.770531 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.874060 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.874126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.874141 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.874183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.874199 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.976893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.976967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.976985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.977008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4811]: I1203 00:07:31.977025 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.079276 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.079347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.079356 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.079371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.079379 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.114065 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.114104 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.114170 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:32 crc kubenswrapper[4811]: E1203 00:07:32.114251 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:32 crc kubenswrapper[4811]: E1203 00:07:32.114366 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:32 crc kubenswrapper[4811]: E1203 00:07:32.114469 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.182342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.182384 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.182394 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.182412 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.182422 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.285529 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.285933 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.286045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.286139 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.286233 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.390064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.391250 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.391477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.391939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.392387 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.496814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.496847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.496858 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.496877 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.496889 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.600346 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.600405 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.600418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.600437 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.600449 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.703946 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.704390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.704605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.704749 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.704873 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.808362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.808421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.808441 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.808465 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.808482 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.912299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.912397 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.912415 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.912439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4811]: I1203 00:07:32.912459 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.016112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.016177 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.016195 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.016220 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.016238 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.092753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.092828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.092851 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.092880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.092902 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: E1203 00:07:33.113481 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.114044 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:33 crc kubenswrapper[4811]: E1203 00:07:33.114224 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.119774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.119828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.119844 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.119868 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.119885 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: E1203 00:07:33.136736 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.142129 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.142202 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.142229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.142299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.142336 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: E1203 00:07:33.161020 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.166878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.166927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.166945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.166969 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.166991 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: E1203 00:07:33.185914 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.191900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.191971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.191986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.192014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.192031 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: E1203 00:07:33.209812 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"349eda2e-d94b-4951-8a31-6d5e4dd813eb\\\",\\\"systemUUID\\\":\\\"304e3ae2-a71e-4783-94bd-e98dcbb7fc0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:33 crc kubenswrapper[4811]: E1203 00:07:33.210076 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.212211 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.212291 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.212317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.212350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.212377 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.315493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.315942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.316141 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.316357 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.316581 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.420319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.420380 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.420397 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.420418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.420430 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.523692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.523741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.523757 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.523777 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.523790 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.627400 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.627480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.627505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.627537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.627561 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.730195 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.730311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.730329 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.730354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.730375 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.832924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.832994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.833011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.833035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.833055 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.935907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.936316 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.936526 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.936831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4811]: I1203 00:07:33.937038 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.040603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.040640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.040691 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.040714 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.040728 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.114428 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.114522 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.114552 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:34 crc kubenswrapper[4811]: E1203 00:07:34.114599 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:34 crc kubenswrapper[4811]: E1203 00:07:34.114814 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:34 crc kubenswrapper[4811]: E1203 00:07:34.115117 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.143201 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.143247 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.143276 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.143295 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.143309 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.246196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.246733 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.246965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.247140 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.247353 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.350412 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.350707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.350767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.350851 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.350912 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.454112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.454702 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.454835 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.454912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.455088 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.557980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.558037 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.558046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.558061 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.558071 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.661337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.661386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.661401 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.661419 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.661434 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.765134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.765205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.765222 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.765251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.765302 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.868991 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.869066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.869087 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.869115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.869137 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.972031 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.972085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.972103 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.972127 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4811]: I1203 00:07:34.972143 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.075780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.075839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.075855 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.075888 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.075908 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.114215 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:35 crc kubenswrapper[4811]: E1203 00:07:35.114452 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.178755 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.178813 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.178828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.178847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.178860 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.280989 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.281067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.281084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.281102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.281114 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.384430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.384541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.384559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.384585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.384604 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.487542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.487645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.487664 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.487692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.487710 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.590766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.590843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.590875 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.590903 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.590923 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.694627 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.694686 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.694699 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.694716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.694730 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.797730 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.797795 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.797817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.797841 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.797858 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.900313 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.900404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.900419 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.900447 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4811]: I1203 00:07:35.900463 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.003278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.003344 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.003362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.003383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.003398 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.106145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.106229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.106255 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.106330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.106359 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.114951 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.115557 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:36 crc kubenswrapper[4811]: E1203 00:07:36.115623 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.115663 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:36 crc kubenswrapper[4811]: E1203 00:07:36.115780 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:36 crc kubenswrapper[4811]: E1203 00:07:36.116036 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.117544 4811 scope.go:117] "RemoveContainer" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" Dec 03 00:07:36 crc kubenswrapper[4811]: E1203 00:07:36.117824 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.209482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.209558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.209585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.209616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.209636 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.312339 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.312406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.312423 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.312454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.312478 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.415424 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.415503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.415528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.415560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.415585 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.519442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.519513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.519537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.519568 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.519593 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.623577 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.623664 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.623689 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.623725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.623754 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.727744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.727793 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.727810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.727834 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.727852 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.831477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.831542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.831559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.831582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.831599 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.934835 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.934908 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.934928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.934955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4811]: I1203 00:07:36.934974 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.037734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.037809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.037835 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.037863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.037884 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.114970 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:37 crc kubenswrapper[4811]: E1203 00:07:37.115132 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.141159 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.141257 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.141327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.141358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.141400 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.228806 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:37 crc kubenswrapper[4811]: E1203 00:07:37.229014 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:37 crc kubenswrapper[4811]: E1203 00:07:37.229084 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs podName:ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c nodeName:}" failed. No retries permitted until 2025-12-03 00:08:41.229061404 +0000 UTC m=+161.370890886 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs") pod "network-metrics-daemon-5w9pv" (UID: "ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.243291 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.243343 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.243356 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.243373 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.243387 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.345985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.346284 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.346367 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.346509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.346592 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.448662 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.448764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.448784 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.448808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.448825 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.551707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.551765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.551774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.551788 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.551801 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.654612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.654667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.654683 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.654703 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.654716 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.756815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.756865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.756877 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.756896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.756911 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.859237 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.859317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.859335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.859354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.859367 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.962726 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.962821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.962843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.962891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4811]: I1203 00:07:37.962904 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.065913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.065995 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.066017 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.066048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.066071 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.114710 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.114829 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:38 crc kubenswrapper[4811]: E1203 00:07:38.114906 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:38 crc kubenswrapper[4811]: E1203 00:07:38.115039 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.115138 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:38 crc kubenswrapper[4811]: E1203 00:07:38.115376 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.170884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.170928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.170947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.170972 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.170991 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.274689 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.274738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.274750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.274771 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.274827 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.377717 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.377752 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.377762 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.377780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.377791 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.480037 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.480382 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.480519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.480630 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.480717 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.583825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.583873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.583891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.583916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.583934 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.685831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.685884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.685902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.685926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.685945 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.788908 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.789351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.789591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.789856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.790302 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.894543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.894606 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.894626 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.894709 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.894742 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.997626 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.997696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.997717 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.997743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4811]: I1203 00:07:38.997761 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.099848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.099891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.099901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.099918 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.099929 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.114712 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:39 crc kubenswrapper[4811]: E1203 00:07:39.114845 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.203093 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.203395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.203754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.204072 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.204295 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.310756 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.310847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.310879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.310912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.310950 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.414764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.415404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.415447 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.415477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.415499 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.518501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.518567 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.518588 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.518613 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.518631 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.622015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.623037 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.623196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.623408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.623542 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.727392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.727476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.727502 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.727534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.727552 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.830508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.830586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.830596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.830618 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.830628 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.933917 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.933968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.933979 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.933997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4811]: I1203 00:07:39.934011 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.037145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.037207 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.037224 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.037252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.037301 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.114832 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.114828 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.115037 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:40 crc kubenswrapper[4811]: E1203 00:07:40.114994 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:40 crc kubenswrapper[4811]: E1203 00:07:40.115166 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:40 crc kubenswrapper[4811]: E1203 00:07:40.115357 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.139952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.140014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.140039 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.140070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.140093 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.152138 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=53.152111106 podStartE2EDuration="53.152111106s" podCreationTimestamp="2025-12-03 00:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:07:40.151594102 +0000 UTC m=+100.293423654" watchObservedRunningTime="2025-12-03 00:07:40.152111106 +0000 UTC m=+100.293940618" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.221716 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m46wp" podStartSLOduration=82.221678638 podStartE2EDuration="1m22.221678638s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:07:40.202458535 +0000 UTC m=+100.344288057" watchObservedRunningTime="2025-12-03 00:07:40.221678638 +0000 UTC m=+100.363508150" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.243247 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.243317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.243331 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.243355 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.243661 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=37.243639499 podStartE2EDuration="37.243639499s" podCreationTimestamp="2025-12-03 00:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:07:40.241987127 +0000 UTC m=+100.383816639" watchObservedRunningTime="2025-12-03 00:07:40.243639499 +0000 UTC m=+100.385468981" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.243976 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.267616 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.267583912 podStartE2EDuration="1m21.267583912s" podCreationTimestamp="2025-12-03 00:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:07:40.266925906 +0000 UTC m=+100.408755418" watchObservedRunningTime="2025-12-03 00:07:40.267583912 +0000 UTC m=+100.409413414" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.346431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.346491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.346503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.346523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.346544 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.362370 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podStartSLOduration=82.362348729 podStartE2EDuration="1m22.362348729s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:07:40.362071272 +0000 UTC m=+100.503900744" watchObservedRunningTime="2025-12-03 00:07:40.362348729 +0000 UTC m=+100.504178211" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.392509 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.392481791 podStartE2EDuration="1m18.392481791s" podCreationTimestamp="2025-12-03 00:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:07:40.390683384 +0000 UTC m=+100.532512866" watchObservedRunningTime="2025-12-03 00:07:40.392481791 +0000 UTC m=+100.534311263" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.429919 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.429896248 podStartE2EDuration="1m22.429896248s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:07:40.426783669 +0000 UTC m=+100.568613141" watchObservedRunningTime="2025-12-03 00:07:40.429896248 +0000 UTC m=+100.571725720" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.430107 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fl6vq" podStartSLOduration=83.430099383 podStartE2EDuration="1m23.430099383s" podCreationTimestamp="2025-12-03 00:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:07:40.403163804 +0000 UTC m=+100.544993276" watchObservedRunningTime="2025-12-03 00:07:40.430099383 +0000 UTC m=+100.571928895" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.448477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.448529 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.448543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.448563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.448576 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.487828 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-c998b" podStartSLOduration=82.487802231 podStartE2EDuration="1m22.487802231s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:07:40.487337169 +0000 UTC m=+100.629166641" watchObservedRunningTime="2025-12-03 00:07:40.487802231 +0000 UTC m=+100.629631713" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.507178 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-56rjt" podStartSLOduration=82.507149776 podStartE2EDuration="1m22.507149776s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:07:40.506193322 +0000 UTC m=+100.648022784" watchObservedRunningTime="2025-12-03 00:07:40.507149776 +0000 UTC m=+100.648979248" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.537092 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pd6c8" podStartSLOduration=83.537068753 podStartE2EDuration="1m23.537068753s" podCreationTimestamp="2025-12-03 00:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:07:40.536096467 +0000 UTC m=+100.677925939" watchObservedRunningTime="2025-12-03 00:07:40.537068753 +0000 UTC m=+100.678898215" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.551320 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.551367 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.551376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.551392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.551403 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.653816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.654455 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.654488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.654537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.654560 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.757991 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.758046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.758057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.758075 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.758089 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.860859 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.860939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.860958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.860981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.860999 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.964434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.964499 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.964522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.964550 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4811]: I1203 00:07:40.964572 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.068038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.068128 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.068150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.068178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.068194 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.114253 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:41 crc kubenswrapper[4811]: E1203 00:07:41.114987 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.171155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.171230 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.171251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.171309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.171326 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.274065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.274118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.274133 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.274157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.274173 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.377062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.377146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.377169 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.377198 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.377219 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.480201 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.480236 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.480252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.480292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.480303 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.583247 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.583338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.583356 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.583382 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.583401 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.686390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.686456 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.686476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.686500 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.686519 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.790646 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.790710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.790727 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.790753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.790771 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.894357 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.894464 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.894489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.894521 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4811]: I1203 00:07:41.894544 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.003942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.004015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.004033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.004055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.004072 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.106896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.106957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.106973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.106998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.107015 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.114441 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.114609 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.114656 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:42 crc kubenswrapper[4811]: E1203 00:07:42.114745 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:42 crc kubenswrapper[4811]: E1203 00:07:42.114831 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:42 crc kubenswrapper[4811]: E1203 00:07:42.114925 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.210085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.210162 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.210193 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.210222 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.210244 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.312764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.312797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.312805 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.312836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.312846 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.414922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.414976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.414994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.415021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.415056 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.518211 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.518318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.518352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.518382 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.518403 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.622465 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.622544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.622574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.622603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.622626 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.725865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.725984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.726014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.726098 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.726130 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.829899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.829951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.829967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.829992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.830005 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.933124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.933166 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.933176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.933193 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4811]: I1203 00:07:42.933204 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.036149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.036219 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.036235 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.036289 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.036308 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.114564 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:43 crc kubenswrapper[4811]: E1203 00:07:43.115063 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.139860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.139919 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.139938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.139963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.139982 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.243809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.243870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.243886 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.243912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.243930 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.317738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.317828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.317847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.317906 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.317925 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.371123 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r"] Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.371976 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.374170 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.374732 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.375581 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.377086 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.497571 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.497976 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.498057 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.498108 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.498348 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.599564 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc 
kubenswrapper[4811]: I1203 00:07:43.599677 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.599730 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.599809 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.599813 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.599900 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.599849 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.601216 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.609746 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.631819 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7bcbb559-8bde-4a06-bd48-16c4f0e251ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nn75r\" (UID: \"7bcbb559-8bde-4a06-bd48-16c4f0e251ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.696046 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" Dec 03 00:07:43 crc kubenswrapper[4811]: W1203 00:07:43.722059 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bcbb559_8bde_4a06_bd48_16c4f0e251ef.slice/crio-4b42c19326909a0470c9552ac60f2ffd412524354e50a53251d307534a530707 WatchSource:0}: Error finding container 4b42c19326909a0470c9552ac60f2ffd412524354e50a53251d307534a530707: Status 404 returned error can't find the container with id 4b42c19326909a0470c9552ac60f2ffd412524354e50a53251d307534a530707 Dec 03 00:07:43 crc kubenswrapper[4811]: I1203 00:07:43.750853 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" event={"ID":"7bcbb559-8bde-4a06-bd48-16c4f0e251ef","Type":"ContainerStarted","Data":"4b42c19326909a0470c9552ac60f2ffd412524354e50a53251d307534a530707"} Dec 03 00:07:44 crc kubenswrapper[4811]: I1203 00:07:44.115098 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:44 crc kubenswrapper[4811]: I1203 00:07:44.115158 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:44 crc kubenswrapper[4811]: I1203 00:07:44.115116 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:44 crc kubenswrapper[4811]: E1203 00:07:44.115390 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:44 crc kubenswrapper[4811]: E1203 00:07:44.115652 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:44 crc kubenswrapper[4811]: E1203 00:07:44.115931 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:44 crc kubenswrapper[4811]: I1203 00:07:44.756961 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" event={"ID":"7bcbb559-8bde-4a06-bd48-16c4f0e251ef","Type":"ContainerStarted","Data":"6db469ea3eea391a04fce0149404ee673bbeea51b0ab46f622acfd9a7b724c60"} Dec 03 00:07:44 crc kubenswrapper[4811]: I1203 00:07:44.782342 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nn75r" podStartSLOduration=86.782313107 podStartE2EDuration="1m26.782313107s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:07:44.781193268 +0000 UTC m=+104.923022780" watchObservedRunningTime="2025-12-03 00:07:44.782313107 +0000 UTC m=+104.924142609" Dec 03 00:07:45 crc kubenswrapper[4811]: I1203 00:07:45.114158 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:45 crc kubenswrapper[4811]: E1203 00:07:45.114363 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:46 crc kubenswrapper[4811]: I1203 00:07:46.114140 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:46 crc kubenswrapper[4811]: E1203 00:07:46.114461 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:46 crc kubenswrapper[4811]: I1203 00:07:46.114192 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:46 crc kubenswrapper[4811]: E1203 00:07:46.114605 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:46 crc kubenswrapper[4811]: I1203 00:07:46.114153 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:46 crc kubenswrapper[4811]: E1203 00:07:46.114668 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:47 crc kubenswrapper[4811]: I1203 00:07:47.114406 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:47 crc kubenswrapper[4811]: E1203 00:07:47.114527 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:48 crc kubenswrapper[4811]: I1203 00:07:48.114171 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:48 crc kubenswrapper[4811]: I1203 00:07:48.114358 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:48 crc kubenswrapper[4811]: E1203 00:07:48.114528 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:48 crc kubenswrapper[4811]: I1203 00:07:48.114603 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:48 crc kubenswrapper[4811]: E1203 00:07:48.114634 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:48 crc kubenswrapper[4811]: E1203 00:07:48.114824 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:49 crc kubenswrapper[4811]: I1203 00:07:49.114991 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:49 crc kubenswrapper[4811]: E1203 00:07:49.115187 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:49 crc kubenswrapper[4811]: I1203 00:07:49.116322 4811 scope.go:117] "RemoveContainer" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" Dec 03 00:07:49 crc kubenswrapper[4811]: E1203 00:07:49.116580 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mjj8p_openshift-ovn-kubernetes(3e8d9251-ed38-4134-b62e-f9a34bf4c755)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" Dec 03 00:07:50 crc kubenswrapper[4811]: I1203 00:07:50.114465 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:50 crc kubenswrapper[4811]: I1203 00:07:50.114614 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:50 crc kubenswrapper[4811]: I1203 00:07:50.115660 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:50 crc kubenswrapper[4811]: E1203 00:07:50.115672 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:50 crc kubenswrapper[4811]: E1203 00:07:50.115811 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:50 crc kubenswrapper[4811]: E1203 00:07:50.115917 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:51 crc kubenswrapper[4811]: I1203 00:07:51.113908 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:51 crc kubenswrapper[4811]: E1203 00:07:51.114095 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:51 crc kubenswrapper[4811]: I1203 00:07:51.791222 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c998b_06cb0758-b33b-4730-a341-cc78a072aa5f/kube-multus/1.log" Dec 03 00:07:51 crc kubenswrapper[4811]: I1203 00:07:51.791989 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c998b_06cb0758-b33b-4730-a341-cc78a072aa5f/kube-multus/0.log" Dec 03 00:07:51 crc kubenswrapper[4811]: I1203 00:07:51.792069 4811 generic.go:334] "Generic (PLEG): container finished" podID="06cb0758-b33b-4730-a341-cc78a072aa5f" containerID="738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f" exitCode=1 Dec 03 00:07:51 crc kubenswrapper[4811]: I1203 00:07:51.792112 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c998b" event={"ID":"06cb0758-b33b-4730-a341-cc78a072aa5f","Type":"ContainerDied","Data":"738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f"} Dec 03 00:07:51 crc kubenswrapper[4811]: I1203 00:07:51.792166 4811 scope.go:117] "RemoveContainer" containerID="d500158892b24a0460d1c8328ee4f36ee17a8a95d00071e9d513a9e79e855a0c" Dec 03 00:07:51 crc kubenswrapper[4811]: I1203 00:07:51.793127 4811 scope.go:117] "RemoveContainer" containerID="738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f" Dec 03 00:07:51 crc kubenswrapper[4811]: E1203 00:07:51.793693 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-c998b_openshift-multus(06cb0758-b33b-4730-a341-cc78a072aa5f)\"" pod="openshift-multus/multus-c998b" podUID="06cb0758-b33b-4730-a341-cc78a072aa5f" Dec 03 00:07:52 crc kubenswrapper[4811]: I1203 00:07:52.114928 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:52 crc kubenswrapper[4811]: E1203 00:07:52.115128 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:52 crc kubenswrapper[4811]: I1203 00:07:52.114928 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:52 crc kubenswrapper[4811]: E1203 00:07:52.115625 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:52 crc kubenswrapper[4811]: I1203 00:07:52.115915 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:52 crc kubenswrapper[4811]: E1203 00:07:52.116169 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:52 crc kubenswrapper[4811]: I1203 00:07:52.798098 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c998b_06cb0758-b33b-4730-a341-cc78a072aa5f/kube-multus/1.log" Dec 03 00:07:53 crc kubenswrapper[4811]: I1203 00:07:53.114387 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:53 crc kubenswrapper[4811]: E1203 00:07:53.115315 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:54 crc kubenswrapper[4811]: I1203 00:07:54.114036 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:54 crc kubenswrapper[4811]: I1203 00:07:54.114131 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:54 crc kubenswrapper[4811]: E1203 00:07:54.114324 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:54 crc kubenswrapper[4811]: I1203 00:07:54.114426 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:54 crc kubenswrapper[4811]: E1203 00:07:54.114590 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:54 crc kubenswrapper[4811]: E1203 00:07:54.114647 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:55 crc kubenswrapper[4811]: I1203 00:07:55.114871 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:55 crc kubenswrapper[4811]: E1203 00:07:55.115816 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:56 crc kubenswrapper[4811]: I1203 00:07:56.114414 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:56 crc kubenswrapper[4811]: I1203 00:07:56.114466 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:56 crc kubenswrapper[4811]: I1203 00:07:56.114425 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:56 crc kubenswrapper[4811]: E1203 00:07:56.114679 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:56 crc kubenswrapper[4811]: E1203 00:07:56.114884 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:56 crc kubenswrapper[4811]: E1203 00:07:56.115052 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:57 crc kubenswrapper[4811]: I1203 00:07:57.114783 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:57 crc kubenswrapper[4811]: E1203 00:07:57.115423 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:07:58 crc kubenswrapper[4811]: I1203 00:07:58.114409 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:58 crc kubenswrapper[4811]: E1203 00:07:58.114614 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:58 crc kubenswrapper[4811]: I1203 00:07:58.114737 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:58 crc kubenswrapper[4811]: I1203 00:07:58.114800 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:58 crc kubenswrapper[4811]: E1203 00:07:58.114947 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:58 crc kubenswrapper[4811]: E1203 00:07:58.115078 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:59 crc kubenswrapper[4811]: I1203 00:07:59.114454 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:07:59 crc kubenswrapper[4811]: E1203 00:07:59.114573 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:08:00 crc kubenswrapper[4811]: E1203 00:08:00.111183 4811 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 03 00:08:00 crc kubenswrapper[4811]: I1203 00:08:00.114436 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:00 crc kubenswrapper[4811]: I1203 00:08:00.115883 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:00 crc kubenswrapper[4811]: E1203 00:08:00.115874 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:00 crc kubenswrapper[4811]: I1203 00:08:00.116004 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:00 crc kubenswrapper[4811]: E1203 00:08:00.116154 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:00 crc kubenswrapper[4811]: E1203 00:08:00.116586 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:00 crc kubenswrapper[4811]: E1203 00:08:00.212933 4811 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 00:08:01 crc kubenswrapper[4811]: I1203 00:08:01.114976 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:08:01 crc kubenswrapper[4811]: E1203 00:08:01.115310 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:08:02 crc kubenswrapper[4811]: I1203 00:08:02.114869 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:02 crc kubenswrapper[4811]: I1203 00:08:02.114957 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:02 crc kubenswrapper[4811]: I1203 00:08:02.115075 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:02 crc kubenswrapper[4811]: E1203 00:08:02.115537 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:02 crc kubenswrapper[4811]: E1203 00:08:02.115763 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:02 crc kubenswrapper[4811]: E1203 00:08:02.115885 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:03 crc kubenswrapper[4811]: I1203 00:08:03.114313 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:08:03 crc kubenswrapper[4811]: E1203 00:08:03.114457 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:08:03 crc kubenswrapper[4811]: I1203 00:08:03.116835 4811 scope.go:117] "RemoveContainer" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" Dec 03 00:08:03 crc kubenswrapper[4811]: I1203 00:08:03.847067 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/3.log" Dec 03 00:08:03 crc kubenswrapper[4811]: I1203 00:08:03.851348 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerStarted","Data":"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39"} Dec 03 00:08:03 crc kubenswrapper[4811]: I1203 00:08:03.851886 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:08:03 crc kubenswrapper[4811]: I1203 00:08:03.900130 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podStartSLOduration=105.900097706 podStartE2EDuration="1m45.900097706s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:03.895994181 +0000 UTC m=+124.037823693" watchObservedRunningTime="2025-12-03 00:08:03.900097706 +0000 UTC m=+124.041927178" Dec 03 00:08:04 crc kubenswrapper[4811]: I1203 00:08:04.114424 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:04 crc kubenswrapper[4811]: I1203 00:08:04.114461 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:04 crc kubenswrapper[4811]: I1203 00:08:04.114495 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:04 crc kubenswrapper[4811]: E1203 00:08:04.114624 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:04 crc kubenswrapper[4811]: E1203 00:08:04.114794 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:04 crc kubenswrapper[4811]: E1203 00:08:04.114952 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:04 crc kubenswrapper[4811]: I1203 00:08:04.119460 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5w9pv"] Dec 03 00:08:04 crc kubenswrapper[4811]: I1203 00:08:04.119642 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:08:04 crc kubenswrapper[4811]: E1203 00:08:04.119789 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:08:05 crc kubenswrapper[4811]: E1203 00:08:05.214359 4811 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 00:08:06 crc kubenswrapper[4811]: I1203 00:08:06.114475 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:06 crc kubenswrapper[4811]: E1203 00:08:06.115055 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:06 crc kubenswrapper[4811]: I1203 00:08:06.114651 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:08:06 crc kubenswrapper[4811]: I1203 00:08:06.114699 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:06 crc kubenswrapper[4811]: I1203 00:08:06.114475 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:06 crc kubenswrapper[4811]: I1203 00:08:06.115219 4811 scope.go:117] "RemoveContainer" containerID="738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f" Dec 03 00:08:06 crc kubenswrapper[4811]: E1203 00:08:06.116074 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:08:06 crc kubenswrapper[4811]: E1203 00:08:06.116345 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:06 crc kubenswrapper[4811]: E1203 00:08:06.116565 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:06 crc kubenswrapper[4811]: I1203 00:08:06.863874 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c998b_06cb0758-b33b-4730-a341-cc78a072aa5f/kube-multus/1.log" Dec 03 00:08:06 crc kubenswrapper[4811]: I1203 00:08:06.864216 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c998b" event={"ID":"06cb0758-b33b-4730-a341-cc78a072aa5f","Type":"ContainerStarted","Data":"6639175e903ec54a486b5c8fc7f020e0d9fc4edcf8b04886d8660e81e0b890f5"} Dec 03 00:08:08 crc kubenswrapper[4811]: I1203 00:08:08.114668 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:08 crc kubenswrapper[4811]: I1203 00:08:08.114760 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:08 crc kubenswrapper[4811]: I1203 00:08:08.114833 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:08 crc kubenswrapper[4811]: I1203 00:08:08.114881 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:08:08 crc kubenswrapper[4811]: E1203 00:08:08.114980 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:08 crc kubenswrapper[4811]: E1203 00:08:08.115796 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:08 crc kubenswrapper[4811]: E1203 00:08:08.116171 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:08 crc kubenswrapper[4811]: E1203 00:08:08.117302 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:08:10 crc kubenswrapper[4811]: I1203 00:08:10.114302 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:10 crc kubenswrapper[4811]: I1203 00:08:10.114473 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:10 crc kubenswrapper[4811]: I1203 00:08:10.115690 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:08:10 crc kubenswrapper[4811]: E1203 00:08:10.115697 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:10 crc kubenswrapper[4811]: I1203 00:08:10.115760 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:10 crc kubenswrapper[4811]: E1203 00:08:10.115835 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5w9pv" podUID="ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c" Dec 03 00:08:10 crc kubenswrapper[4811]: E1203 00:08:10.115934 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:10 crc kubenswrapper[4811]: E1203 00:08:10.115934 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:11 crc kubenswrapper[4811]: I1203 00:08:11.143657 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:08:12 crc kubenswrapper[4811]: I1203 00:08:12.114862 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:12 crc kubenswrapper[4811]: I1203 00:08:12.115068 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:08:12 crc kubenswrapper[4811]: I1203 00:08:12.115723 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:12 crc kubenswrapper[4811]: I1203 00:08:12.115770 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:12 crc kubenswrapper[4811]: I1203 00:08:12.119489 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 00:08:12 crc kubenswrapper[4811]: I1203 00:08:12.120031 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 00:08:12 crc kubenswrapper[4811]: I1203 00:08:12.120132 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 00:08:12 crc kubenswrapper[4811]: I1203 00:08:12.120446 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 00:08:12 crc kubenswrapper[4811]: I1203 00:08:12.121236 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 00:08:12 crc kubenswrapper[4811]: I1203 00:08:12.121595 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 00:08:13 crc kubenswrapper[4811]: I1203 00:08:13.977432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.030459 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.031017 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.032192 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vjx2n"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.033100 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.033323 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.033817 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.035285 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.035874 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.036004 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.036334 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.036415 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.036965 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.039843 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.039887 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.039962 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.040764 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.041908 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.042246 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.042349 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.042871 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.042985 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.043118 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.044663 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.044674 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.051032 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.051647 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.053054 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.053674 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.055684 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.064653 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.066417 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.066821 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.067983 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.069636 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.069742 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.069970 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.070125 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.070725 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.071018 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.071175 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.071661 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.072535 4811 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.076150 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.076841 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.077152 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.077758 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.078268 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.079530 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.081805 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.082450 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06add20d-ca83-4ab5-8e5d-3238e99535df-apiservice-cert\") pod \"packageserver-d55dfcdfc-r897j\" (UID: \"06add20d-ca83-4ab5-8e5d-3238e99535df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.082509 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-client-ca\") pod \"route-controller-manager-6576b87f9c-pdrt9\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.082542 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68715556-bc95-4811-8996-d338241741e8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-w974d\" (UID: \"68715556-bc95-4811-8996-d338241741e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.082623 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-78j8h\" (UID: \"b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.082657 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b8mqn"] Dec 03 00:08:14 crc 
kubenswrapper[4811]: I1203 00:08:14.094182 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.094372 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.094940 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.095431 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc0deee7-632a-406e-9ecd-b3adcea8f557-serving-cert\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.095459 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-78j8h\" (UID: \"b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.095476 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-pdrt9\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.095493 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dc0deee7-632a-406e-9ecd-b3adcea8f557-etcd-ca\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.095675 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06add20d-ca83-4ab5-8e5d-3238e99535df-webhook-cert\") pod \"packageserver-d55dfcdfc-r897j\" (UID: \"06add20d-ca83-4ab5-8e5d-3238e99535df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.095697 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-config\") pod \"route-controller-manager-6576b87f9c-pdrt9\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.096057 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md6p8\" (UniqueName: \"kubernetes.io/projected/dbc0f41e-4c40-45ef-9847-d1516558f118-kube-api-access-md6p8\") pod 
\"machine-config-controller-84d6567774-sstkc\" (UID: \"dbc0f41e-4c40-45ef-9847-d1516558f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.096753 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.097784 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.099993 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.100625 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.100846 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wpzs6"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.101620 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wpzs6" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.101972 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.102156 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.102824 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.103410 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.105711 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.105838 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rhl4d"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.106506 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.107428 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.107656 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.110800 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.112277 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.112388 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.112658 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.112686 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.112834 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.113240 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.113756 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.113940 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.114069 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.114183 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.114307 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.114416 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.114921 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.124064 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.125033 4811 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pstnw"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.124389 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.125204 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.125945 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5j74h"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.126535 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5j74h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.127045 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pstnw" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.132409 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pcr7\" (UniqueName: \"kubernetes.io/projected/dc0deee7-632a-406e-9ecd-b3adcea8f557-kube-api-access-6pcr7\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.132461 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvvth\" (UniqueName: \"kubernetes.io/projected/06add20d-ca83-4ab5-8e5d-3238e99535df-kube-api-access-rvvth\") pod \"packageserver-d55dfcdfc-r897j\" (UID: \"06add20d-ca83-4ab5-8e5d-3238e99535df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.132517 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dc0deee7-632a-406e-9ecd-b3adcea8f557-etcd-service-ca\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.132537 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68715556-bc95-4811-8996-d338241741e8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-w974d\" (UID: \"68715556-bc95-4811-8996-d338241741e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.132576 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc0deee7-632a-406e-9ecd-b3adcea8f557-config\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.132594 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dbc0f41e-4c40-45ef-9847-d1516558f118-proxy-tls\") pod \"machine-config-controller-84d6567774-sstkc\" 
(UID: \"dbc0f41e-4c40-45ef-9847-d1516558f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.132620 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbc0f41e-4c40-45ef-9847-d1516558f118-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sstkc\" (UID: \"dbc0f41e-4c40-45ef-9847-d1516558f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.132651 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4ctj\" (UniqueName: \"kubernetes.io/projected/68715556-bc95-4811-8996-d338241741e8-kube-api-access-r4ctj\") pod \"openshift-apiserver-operator-796bbdcf4f-w974d\" (UID: \"68715556-bc95-4811-8996-d338241741e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.132681 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/06add20d-ca83-4ab5-8e5d-3238e99535df-tmpfs\") pod \"packageserver-d55dfcdfc-r897j\" (UID: \"06add20d-ca83-4ab5-8e5d-3238e99535df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.132700 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9djtz\" (UniqueName: \"kubernetes.io/projected/b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b-kube-api-access-9djtz\") pod \"kube-storage-version-migrator-operator-b67b599dd-78j8h\" (UID: \"b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.132717 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dc0deee7-632a-406e-9ecd-b3adcea8f557-etcd-client\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.132735 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmmq6\" (UniqueName: \"kubernetes.io/projected/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-kube-api-access-tmmq6\") pod \"route-controller-manager-6576b87f9c-pdrt9\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.133018 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7bqms"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.133754 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vscpq"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.134060 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.134243 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.134063 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.135313 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.140845 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.141627 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.142467 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2mbnl"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.143043 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.149401 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29412000-7zxph"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.150174 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29412000-7zxph" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.150661 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.150835 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.150966 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.151088 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.151608 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.151860 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.153185 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.153442 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.153631 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 
00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.153707 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.154310 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.154544 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.154827 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.155090 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.155224 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.155550 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.155684 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.155797 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.156006 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.156038 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.156223 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.156408 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.172509 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.186854 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.187506 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.187787 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.190588 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.191352 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.191599 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.191818 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.191951 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.192010 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.195424 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.195539 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.196037 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.198006 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.199377 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-22wtr"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.200466 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.220338 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.221128 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.221582 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.221681 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-22wtr" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.222535 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.222634 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.223030 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.229581 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.230471 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.232103 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.232500 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.232719 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.232831 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.233317 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-956mn"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.233400 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.233794 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.233947 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmmq6\" (UniqueName: \"kubernetes.io/projected/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-kube-api-access-tmmq6\") pod \"route-controller-manager-6576b87f9c-pdrt9\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.233980 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06add20d-ca83-4ab5-8e5d-3238e99535df-apiservice-cert\") pod \"packageserver-d55dfcdfc-r897j\" (UID: \"06add20d-ca83-4ab5-8e5d-3238e99535df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234003 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-client-ca\") pod \"route-controller-manager-6576b87f9c-pdrt9\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234024 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68715556-bc95-4811-8996-d338241741e8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-w974d\" (UID: \"68715556-bc95-4811-8996-d338241741e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234046 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a505c1c-0ab7-4920-b43e-6475fae9b32b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fbzcm\" (UID: \"4a505c1c-0ab7-4920-b43e-6475fae9b32b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234065 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcntv\" (UniqueName: \"kubernetes.io/projected/4a505c1c-0ab7-4920-b43e-6475fae9b32b-kube-api-access-bcntv\") pod \"package-server-manager-789f6589d5-fbzcm\" (UID: \"4a505c1c-0ab7-4920-b43e-6475fae9b32b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234084 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsxp9\" (UniqueName: \"kubernetes.io/projected/6ee2362a-613c-4927-9525-3d7f87167ab7-kube-api-access-fsxp9\") pod \"machine-api-operator-5694c8668f-7bqms\" (UID: \"6ee2362a-613c-4927-9525-3d7f87167ab7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234104 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllng\" (UniqueName: \"kubernetes.io/projected/9746028f-d836-467c-91d8-4530f09ac665-kube-api-access-wllng\") pod 
\"dns-operator-744455d44c-pstnw\" (UID: \"9746028f-d836-467c-91d8-4530f09ac665\") " pod="openshift-dns-operator/dns-operator-744455d44c-pstnw" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234121 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/abf21905-6e83-49e0-b238-dae340d0bcca-srv-cert\") pod \"catalog-operator-68c6474976-ldphj\" (UID: \"abf21905-6e83-49e0-b238-dae340d0bcca\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234140 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee2362a-613c-4927-9525-3d7f87167ab7-config\") pod \"machine-api-operator-5694c8668f-7bqms\" (UID: \"6ee2362a-613c-4927-9525-3d7f87167ab7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234171 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-78j8h\" (UID: \"b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234190 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee2362a-613c-4927-9525-3d7f87167ab7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7bqms\" (UID: \"6ee2362a-613c-4927-9525-3d7f87167ab7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234209 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqxgq\" (UniqueName: \"kubernetes.io/projected/fc347b57-7a13-480c-b630-f2486ce233fc-kube-api-access-xqxgq\") pod \"collect-profiles-29412000-vfb2p\" (UID: \"fc347b57-7a13-480c-b630-f2486ce233fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234226 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc0deee7-632a-406e-9ecd-b3adcea8f557-serving-cert\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234243 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j64d2\" (UID: \"bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234307 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c85f3795-cbc4-46fe-ba79-b68904df2de3-config\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: 
\"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234325 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9746028f-d836-467c-91d8-4530f09ac665-metrics-tls\") pod \"dns-operator-744455d44c-pstnw\" (UID: \"9746028f-d836-467c-91d8-4530f09ac665\") " pod="openshift-dns-operator/dns-operator-744455d44c-pstnw" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234341 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-78j8h\" (UID: \"b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234697 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-pdrt9\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234732 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlb88\" (UniqueName: \"kubernetes.io/projected/6531f918-708c-4bb8-a418-d09dfb7a8b3a-kube-api-access-jlb88\") pod \"downloads-7954f5f757-5j74h\" (UID: \"6531f918-708c-4bb8-a418-d09dfb7a8b3a\") " pod="openshift-console/downloads-7954f5f757-5j74h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234759 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dc0deee7-632a-406e-9ecd-b3adcea8f557-etcd-ca\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234780 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/748ff627-bc26-49ba-a0bd-f970f18f216f-config\") pod \"service-ca-operator-777779d784-vscpq\" (UID: \"748ff627-bc26-49ba-a0bd-f970f18f216f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234800 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06add20d-ca83-4ab5-8e5d-3238e99535df-webhook-cert\") pod \"packageserver-d55dfcdfc-r897j\" (UID: \"06add20d-ca83-4ab5-8e5d-3238e99535df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234821 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpr5q\" (UniqueName: \"kubernetes.io/projected/abf21905-6e83-49e0-b238-dae340d0bcca-kube-api-access-rpr5q\") pod \"catalog-operator-68c6474976-ldphj\" (UID: \"abf21905-6e83-49e0-b238-dae340d0bcca\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234840 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f4726ee-a716-44e3-a2d7-cfd634b1b476-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vzhhf\" (UID: \"6f4726ee-a716-44e3-a2d7-cfd634b1b476\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.234973 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8b8z\" (UniqueName: \"kubernetes.io/projected/ac1cc52b-99ba-456c-8ffe-cebbc7d4e827-kube-api-access-k8b8z\") pod \"image-pruner-29412000-7zxph\" (UID: \"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827\") " pod="openshift-image-registry/image-pruner-29412000-7zxph" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.235002 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-config\") pod \"route-controller-manager-6576b87f9c-pdrt9\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.235024 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws56k\" (UniqueName: \"kubernetes.io/projected/bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8-kube-api-access-ws56k\") pod \"cluster-samples-operator-665b6dd947-j64d2\" (UID: \"bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.235043 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/abf21905-6e83-49e0-b238-dae340d0bcca-profile-collector-cert\") pod \"catalog-operator-68c6474976-ldphj\" (UID: \"abf21905-6e83-49e0-b238-dae340d0bcca\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.235062 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md6p8\" (UniqueName: \"kubernetes.io/projected/dbc0f41e-4c40-45ef-9847-d1516558f118-kube-api-access-md6p8\") pod \"machine-config-controller-84d6567774-sstkc\" (UID: \"dbc0f41e-4c40-45ef-9847-d1516558f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.235079 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc347b57-7a13-480c-b630-f2486ce233fc-secret-volume\") pod \"collect-profiles-29412000-vfb2p\" (UID: \"fc347b57-7a13-480c-b630-f2486ce233fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.235105 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pcr7\" (UniqueName: \"kubernetes.io/projected/dc0deee7-632a-406e-9ecd-b3adcea8f557-kube-api-access-6pcr7\") pod \"etcd-operator-b45778765-vjx2n\" (UID: 
\"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.235412 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.235927 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.236023 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.236031 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dc0deee7-632a-406e-9ecd-b3adcea8f557-etcd-ca\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.236116 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.236220 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.236436 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.236650 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.236755 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.236920 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-client-ca\") pod \"route-controller-manager-6576b87f9c-pdrt9\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.235123 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b917a512-4630-408e-9d9c-0ca6808d4a5b-serving-cert\") pod \"console-operator-58897d9998-b8mqn\" (UID: \"b917a512-4630-408e-9d9c-0ca6808d4a5b\") " pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.237967 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-78j8h\" (UID: \"b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238011 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvvth\" (UniqueName: 
\"kubernetes.io/projected/06add20d-ca83-4ab5-8e5d-3238e99535df-kube-api-access-rvvth\") pod \"packageserver-d55dfcdfc-r897j\" (UID: \"06add20d-ca83-4ab5-8e5d-3238e99535df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238066 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb5b4076-512d-4f19-a773-36cd7a54a8a4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gwmbx\" (UID: \"cb5b4076-512d-4f19-a773-36cd7a54a8a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238107 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb5b4076-512d-4f19-a773-36cd7a54a8a4-config\") pod \"kube-apiserver-operator-766d6c64bb-gwmbx\" (UID: \"cb5b4076-512d-4f19-a773-36cd7a54a8a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238135 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb5b4076-512d-4f19-a773-36cd7a54a8a4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gwmbx\" (UID: \"cb5b4076-512d-4f19-a773-36cd7a54a8a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238164 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-config\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238194 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-serving-cert\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238220 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f4726ee-a716-44e3-a2d7-cfd634b1b476-config\") pod \"kube-controller-manager-operator-78b949d7b-vzhhf\" (UID: \"6f4726ee-a716-44e3-a2d7-cfd634b1b476\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238242 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-client-ca\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238286 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238316 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szd9z\" (UniqueName: \"kubernetes.io/projected/c85f3795-cbc4-46fe-ba79-b68904df2de3-kube-api-access-szd9z\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238361 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6ee2362a-613c-4927-9525-3d7f87167ab7-images\") pod \"machine-api-operator-5694c8668f-7bqms\" (UID: \"6ee2362a-613c-4927-9525-3d7f87167ab7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238382 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac1cc52b-99ba-456c-8ffe-cebbc7d4e827-serviceca\") pod \"image-pruner-29412000-7zxph\" (UID: \"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827\") " pod="openshift-image-registry/image-pruner-29412000-7zxph" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238406 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bnng\" (UniqueName: \"kubernetes.io/projected/4a68baef-4258-4aea-b775-172682cbf844-kube-api-access-5bnng\") pod \"migrator-59844c95c7-wpzs6\" (UID: \"4a68baef-4258-4aea-b775-172682cbf844\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wpzs6" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238437 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dc0deee7-632a-406e-9ecd-b3adcea8f557-etcd-service-ca\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238460 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68715556-bc95-4811-8996-d338241741e8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-w974d\" (UID: \"68715556-bc95-4811-8996-d338241741e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238483 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f4726ee-a716-44e3-a2d7-cfd634b1b476-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vzhhf\" (UID: \"6f4726ee-a716-44e3-a2d7-cfd634b1b476\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238507 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/748ff627-bc26-49ba-a0bd-f970f18f216f-serving-cert\") pod \"service-ca-operator-777779d784-vscpq\" (UID: \"748ff627-bc26-49ba-a0bd-f970f18f216f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238537 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c85f3795-cbc4-46fe-ba79-b68904df2de3-serving-cert\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238558 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl54d\" (UniqueName: \"kubernetes.io/projected/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-kube-api-access-pl54d\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238620 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c85f3795-cbc4-46fe-ba79-b68904df2de3-service-ca-bundle\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238646 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b917a512-4630-408e-9d9c-0ca6808d4a5b-trusted-ca\") pod \"console-operator-58897d9998-b8mqn\" (UID: \"b917a512-4630-408e-9d9c-0ca6808d4a5b\") " pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238673 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc0deee7-632a-406e-9ecd-b3adcea8f557-config\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238698 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dbc0f41e-4c40-45ef-9847-d1516558f118-proxy-tls\") pod \"machine-config-controller-84d6567774-sstkc\" (UID: \"dbc0f41e-4c40-45ef-9847-d1516558f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238724 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c85f3795-cbc4-46fe-ba79-b68904df2de3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238747 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/fc347b57-7a13-480c-b630-f2486ce233fc-config-volume\") pod \"collect-profiles-29412000-vfb2p\" (UID: \"fc347b57-7a13-480c-b630-f2486ce233fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238780 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbc0f41e-4c40-45ef-9847-d1516558f118-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sstkc\" (UID: \"dbc0f41e-4c40-45ef-9847-d1516558f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238807 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b917a512-4630-408e-9d9c-0ca6808d4a5b-config\") pod \"console-operator-58897d9998-b8mqn\" (UID: \"b917a512-4630-408e-9d9c-0ca6808d4a5b\") " pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238834 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtfn8\" (UniqueName: \"kubernetes.io/projected/748ff627-bc26-49ba-a0bd-f970f18f216f-kube-api-access-qtfn8\") pod \"service-ca-operator-777779d784-vscpq\" (UID: \"748ff627-bc26-49ba-a0bd-f970f18f216f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.238999 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gc42\" (UniqueName: \"kubernetes.io/projected/b917a512-4630-408e-9d9c-0ca6808d4a5b-kube-api-access-2gc42\") pod \"console-operator-58897d9998-b8mqn\" (UID: \"b917a512-4630-408e-9d9c-0ca6808d4a5b\") " pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.239046 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4ctj\" (UniqueName: \"kubernetes.io/projected/68715556-bc95-4811-8996-d338241741e8-kube-api-access-r4ctj\") pod \"openshift-apiserver-operator-796bbdcf4f-w974d\" (UID: \"68715556-bc95-4811-8996-d338241741e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.239080 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/06add20d-ca83-4ab5-8e5d-3238e99535df-tmpfs\") pod \"packageserver-d55dfcdfc-r897j\" (UID: \"06add20d-ca83-4ab5-8e5d-3238e99535df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.239105 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9djtz\" (UniqueName: \"kubernetes.io/projected/b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b-kube-api-access-9djtz\") pod \"kube-storage-version-migrator-operator-b67b599dd-78j8h\" (UID: \"b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.239129 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/dc0deee7-632a-406e-9ecd-b3adcea8f557-etcd-client\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.242548 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dc0deee7-632a-406e-9ecd-b3adcea8f557-etcd-client\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.242852 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-pdrt9\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.243300 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dc0deee7-632a-406e-9ecd-b3adcea8f557-etcd-service-ca\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.243670 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbc0f41e-4c40-45ef-9847-d1516558f118-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sstkc\" (UID: \"dbc0f41e-4c40-45ef-9847-d1516558f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.243711 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68715556-bc95-4811-8996-d338241741e8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-w974d\" (UID: \"68715556-bc95-4811-8996-d338241741e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.244048 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/06add20d-ca83-4ab5-8e5d-3238e99535df-tmpfs\") pod \"packageserver-d55dfcdfc-r897j\" (UID: \"06add20d-ca83-4ab5-8e5d-3238e99535df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.236645 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-config\") pod \"route-controller-manager-6576b87f9c-pdrt9\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.244807 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-78j8h\" (UID: \"b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.245725 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68715556-bc95-4811-8996-d338241741e8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-w974d\" (UID: \"68715556-bc95-4811-8996-d338241741e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.245782 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc0deee7-632a-406e-9ecd-b3adcea8f557-config\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.246397 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc0deee7-632a-406e-9ecd-b3adcea8f557-serving-cert\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.247110 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dbc0f41e-4c40-45ef-9847-d1516558f118-proxy-tls\") pod \"machine-config-controller-84d6567774-sstkc\" (UID: \"dbc0f41e-4c40-45ef-9847-d1516558f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.247689 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06add20d-ca83-4ab5-8e5d-3238e99535df-apiservice-cert\") pod \"packageserver-d55dfcdfc-r897j\" (UID: \"06add20d-ca83-4ab5-8e5d-3238e99535df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.255904 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06add20d-ca83-4ab5-8e5d-3238e99535df-webhook-cert\") pod \"packageserver-d55dfcdfc-r897j\" (UID: \"06add20d-ca83-4ab5-8e5d-3238e99535df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.256777 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.265990 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-chklk"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.266229 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.267200 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wbv94"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.267394 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.268194 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zc8fl"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.268376 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.268598 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9jgcw"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.268774 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.268961 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4ds94"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.269114 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.269309 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.269450 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.269846 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hd7pw"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.269967 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.271113 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cgwfj"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.271480 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.271639 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.271967 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.272586 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.273313 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.273716 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.274492 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.274634 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.275995 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.276971 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.277958 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.279613 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wpzs6"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.280632 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.281513 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vjx2n"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.282480 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.283437 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b8mqn"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.284651 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.285589 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-cdlqr"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.299801 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.301587 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.301645 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zvgcl"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.301980 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.303390 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pstnw"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.303542 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.304608 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zc8fl"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.305698 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.306842 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.307971 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vscpq"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.309183 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.310711 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.311889 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-22wtr"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.314002 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.315388 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.316503 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2mbnl"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.317654 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5j74h"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.319051 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rhl4d"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.320342 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-chklk"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.321088 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.322352 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7bqms"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.323387 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-ssdgl"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.325110 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8447f"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.325295 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ssdgl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.325788 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8447f" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.326153 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29412000-7zxph"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.327489 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.328820 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9jgcw"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.330011 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.331448 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ssdgl"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.332921 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-956mn"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.333321 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.334323 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.335641 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.337237 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4ds94"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339699 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f4726ee-a716-44e3-a2d7-cfd634b1b476-config\") pod \"kube-controller-manager-operator-78b949d7b-vzhhf\" (UID: \"6f4726ee-a716-44e3-a2d7-cfd634b1b476\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339734 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-client-ca\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339758 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339782 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-image-import-ca\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339801 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6ee2362a-613c-4927-9525-3d7f87167ab7-images\") pod \"machine-api-operator-5694c8668f-7bqms\" (UID: \"6ee2362a-613c-4927-9525-3d7f87167ab7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339822 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szd9z\" (UniqueName: \"kubernetes.io/projected/c85f3795-cbc4-46fe-ba79-b68904df2de3-kube-api-access-szd9z\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339848 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac1cc52b-99ba-456c-8ffe-cebbc7d4e827-serviceca\") pod \"image-pruner-29412000-7zxph\" (UID: \"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827\") " pod="openshift-image-registry/image-pruner-29412000-7zxph" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339866 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bnng\" (UniqueName: \"kubernetes.io/projected/4a68baef-4258-4aea-b775-172682cbf844-kube-api-access-5bnng\") pod \"migrator-59844c95c7-wpzs6\" (UID: \"4a68baef-4258-4aea-b775-172682cbf844\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wpzs6" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339890 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e538a4e7-dd56-48db-828e-49af39ac5def-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339910 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339928 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8dc919c-6856-4c26-a76d-3ba3212fe7c3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q87bq\" (UID: \"c8dc919c-6856-4c26-a76d-3ba3212fe7c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339948 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6f4726ee-a716-44e3-a2d7-cfd634b1b476-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vzhhf\" (UID: \"6f4726ee-a716-44e3-a2d7-cfd634b1b476\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339964 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/748ff627-bc26-49ba-a0bd-f970f18f216f-serving-cert\") pod \"service-ca-operator-777779d784-vscpq\" (UID: \"748ff627-bc26-49ba-a0bd-f970f18f216f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.339982 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zj8w\" (UniqueName: \"kubernetes.io/projected/2735423a-1c0a-489c-ada2-f5ba5aa58397-kube-api-access-9zj8w\") pod \"machine-config-operator-74547568cd-ldnmm\" (UID: \"2735423a-1c0a-489c-ada2-f5ba5aa58397\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340002 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5580144-bfbe-490a-a448-2c225be80621-node-pullsecrets\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340019 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e538a4e7-dd56-48db-828e-49af39ac5def-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340039 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2735423a-1c0a-489c-ada2-f5ba5aa58397-images\") pod \"machine-config-operator-74547568cd-ldnmm\" (UID: \"2735423a-1c0a-489c-ada2-f5ba5aa58397\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340057 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c85f3795-cbc4-46fe-ba79-b68904df2de3-serving-cert\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340073 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl54d\" (UniqueName: \"kubernetes.io/projected/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-kube-api-access-pl54d\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340094 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/72e13ac0-ed91-41a8-8df4-ca88a2838fd3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s6jkt\" (UID: \"72e13ac0-ed91-41a8-8df4-ca88a2838fd3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340123 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c85f3795-cbc4-46fe-ba79-b68904df2de3-service-ca-bundle\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340143 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b917a512-4630-408e-9d9c-0ca6808d4a5b-trusted-ca\") pod \"console-operator-58897d9998-b8mqn\" (UID: \"b917a512-4630-408e-9d9c-0ca6808d4a5b\") " pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340165 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5580144-bfbe-490a-a448-2c225be80621-serving-cert\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340185 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-audit\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340212 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-etcd-serving-ca\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340238 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c85f3795-cbc4-46fe-ba79-b68904df2de3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340255 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc347b57-7a13-480c-b630-f2486ce233fc-config-volume\") pod \"collect-profiles-29412000-vfb2p\" (UID: \"fc347b57-7a13-480c-b630-f2486ce233fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340292 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2735423a-1c0a-489c-ada2-f5ba5aa58397-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ldnmm\" (UID: \"2735423a-1c0a-489c-ada2-f5ba5aa58397\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340303 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cgwfj"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340313 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b917a512-4630-408e-9d9c-0ca6808d4a5b-config\") pod \"console-operator-58897d9998-b8mqn\" (UID: \"b917a512-4630-408e-9d9c-0ca6808d4a5b\") " pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340446 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzmzm\" (UniqueName: \"kubernetes.io/projected/d5580144-bfbe-490a-a448-2c225be80621-kube-api-access-kzmzm\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340547 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtfn8\" (UniqueName: \"kubernetes.io/projected/748ff627-bc26-49ba-a0bd-f970f18f216f-kube-api-access-qtfn8\") pod \"service-ca-operator-777779d784-vscpq\" (UID: \"748ff627-bc26-49ba-a0bd-f970f18f216f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340614 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gc42\" (UniqueName: \"kubernetes.io/projected/b917a512-4630-408e-9d9c-0ca6808d4a5b-kube-api-access-2gc42\") pod \"console-operator-58897d9998-b8mqn\" (UID: \"b917a512-4630-408e-9d9c-0ca6808d4a5b\") " pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340642 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5flqn\" (UniqueName: \"kubernetes.io/projected/72e13ac0-ed91-41a8-8df4-ca88a2838fd3-kube-api-access-5flqn\") pod \"control-plane-machine-set-operator-78cbb6b69f-s6jkt\" (UID: \"72e13ac0-ed91-41a8-8df4-ca88a2838fd3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340728 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsnqj\" (UniqueName: \"kubernetes.io/projected/e538a4e7-dd56-48db-828e-49af39ac5def-kube-api-access-wsnqj\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340774 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/513c0923-f361-46af-8761-b4d809c1b287-machine-approver-tls\") pod \"machine-approver-56656f9798-5rk4f\" (UID: \"513c0923-f361-46af-8761-b4d809c1b287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 
03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340851 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5580144-bfbe-490a-a448-2c225be80621-etcd-client\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340883 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5580144-bfbe-490a-a448-2c225be80621-encryption-config\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340928 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a505c1c-0ab7-4920-b43e-6475fae9b32b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fbzcm\" (UID: \"4a505c1c-0ab7-4920-b43e-6475fae9b32b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340951 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcntv\" (UniqueName: \"kubernetes.io/projected/4a505c1c-0ab7-4920-b43e-6475fae9b32b-kube-api-access-bcntv\") pod \"package-server-manager-789f6589d5-fbzcm\" (UID: \"4a505c1c-0ab7-4920-b43e-6475fae9b32b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.340974 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8dc919c-6856-4c26-a76d-3ba3212fe7c3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q87bq\" (UID: \"c8dc919c-6856-4c26-a76d-3ba3212fe7c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.341019 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsxp9\" (UniqueName: \"kubernetes.io/projected/6ee2362a-613c-4927-9525-3d7f87167ab7-kube-api-access-fsxp9\") pod \"machine-api-operator-5694c8668f-7bqms\" (UID: \"6ee2362a-613c-4927-9525-3d7f87167ab7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.341041 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wllng\" (UniqueName: \"kubernetes.io/projected/9746028f-d836-467c-91d8-4530f09ac665-kube-api-access-wllng\") pod \"dns-operator-744455d44c-pstnw\" (UID: \"9746028f-d836-467c-91d8-4530f09ac665\") " pod="openshift-dns-operator/dns-operator-744455d44c-pstnw" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.341096 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b917a512-4630-408e-9d9c-0ca6808d4a5b-config\") pod \"console-operator-58897d9998-b8mqn\" (UID: \"b917a512-4630-408e-9d9c-0ca6808d4a5b\") " pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.341106 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e538a4e7-dd56-48db-828e-49af39ac5def-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.341495 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wbv94"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.341758 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f4726ee-a716-44e3-a2d7-cfd634b1b476-config\") pod \"kube-controller-manager-operator-78b949d7b-vzhhf\" (UID: \"6f4726ee-a716-44e3-a2d7-cfd634b1b476\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.342095 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee2362a-613c-4927-9525-3d7f87167ab7-config\") pod \"machine-api-operator-5694c8668f-7bqms\" (UID: \"6ee2362a-613c-4927-9525-3d7f87167ab7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.342156 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/abf21905-6e83-49e0-b238-dae340d0bcca-srv-cert\") pod \"catalog-operator-68c6474976-ldphj\" (UID: \"abf21905-6e83-49e0-b238-dae340d0bcca\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.342228 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8dc919c-6856-4c26-a76d-3ba3212fe7c3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q87bq\" (UID: \"c8dc919c-6856-4c26-a76d-3ba3212fe7c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.342302 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmw6z\" (UniqueName: \"kubernetes.io/projected/4bdd961a-4364-4a42-b398-17570f149c42-kube-api-access-bmw6z\") pod \"service-ca-9c57cc56f-zc8fl\" (UID: \"4bdd961a-4364-4a42-b398-17570f149c42\") " pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.343036 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-client-ca\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.343149 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zvgcl"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.344295 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.344711 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac1cc52b-99ba-456c-8ffe-cebbc7d4e827-serviceca\") pod \"image-pruner-29412000-7zxph\" (UID: \"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827\") " pod="openshift-image-registry/image-pruner-29412000-7zxph" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.344869 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cdlqr"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.345308 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6ee2362a-613c-4927-9525-3d7f87167ab7-images\") pod \"machine-api-operator-5694c8668f-7bqms\" (UID: \"6ee2362a-613c-4927-9525-3d7f87167ab7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.345348 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c85f3795-cbc4-46fe-ba79-b68904df2de3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.345466 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee2362a-613c-4927-9525-3d7f87167ab7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7bqms\" (UID: \"6ee2362a-613c-4927-9525-3d7f87167ab7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.345602 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c85f3795-cbc4-46fe-ba79-b68904df2de3-service-ca-bundle\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.346755 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9746028f-d836-467c-91d8-4530f09ac665-metrics-tls\") pod \"dns-operator-744455d44c-pstnw\" (UID: \"9746028f-d836-467c-91d8-4530f09ac665\") " pod="openshift-dns-operator/dns-operator-744455d44c-pstnw" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.346791 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqxgq\" (UniqueName: \"kubernetes.io/projected/fc347b57-7a13-480c-b630-f2486ce233fc-kube-api-access-xqxgq\") pod \"collect-profiles-29412000-vfb2p\" (UID: \"fc347b57-7a13-480c-b630-f2486ce233fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.346819 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j64d2\" (UID: 
\"bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.346871 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c85f3795-cbc4-46fe-ba79-b68904df2de3-config\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.346896 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513c0923-f361-46af-8761-b4d809c1b287-config\") pod \"machine-approver-56656f9798-5rk4f\" (UID: \"513c0923-f361-46af-8761-b4d809c1b287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.346922 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlb88\" (UniqueName: \"kubernetes.io/projected/6531f918-708c-4bb8-a418-d09dfb7a8b3a-kube-api-access-jlb88\") pod \"downloads-7954f5f757-5j74h\" (UID: \"6531f918-708c-4bb8-a418-d09dfb7a8b3a\") " pod="openshift-console/downloads-7954f5f757-5j74h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.346945 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/748ff627-bc26-49ba-a0bd-f970f18f216f-config\") pod \"service-ca-operator-777779d784-vscpq\" (UID: \"748ff627-bc26-49ba-a0bd-f970f18f216f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.346996 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7f7cb9ef-f206-4b06-918f-1ac96967e618-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nj6b5\" (UID: \"7f7cb9ef-f206-4b06-918f-1ac96967e618\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347020 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpr5q\" (UniqueName: \"kubernetes.io/projected/abf21905-6e83-49e0-b238-dae340d0bcca-kube-api-access-rpr5q\") pod \"catalog-operator-68c6474976-ldphj\" (UID: \"abf21905-6e83-49e0-b238-dae340d0bcca\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347039 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f7cb9ef-f206-4b06-918f-1ac96967e618-srv-cert\") pod \"olm-operator-6b444d44fb-nj6b5\" (UID: \"7f7cb9ef-f206-4b06-918f-1ac96967e618\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347059 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f4726ee-a716-44e3-a2d7-cfd634b1b476-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vzhhf\" (UID: \"6f4726ee-a716-44e3-a2d7-cfd634b1b476\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347078 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8b8z\" (UniqueName: \"kubernetes.io/projected/ac1cc52b-99ba-456c-8ffe-cebbc7d4e827-kube-api-access-k8b8z\") pod \"image-pruner-29412000-7zxph\" (UID: \"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827\") " pod="openshift-image-registry/image-pruner-29412000-7zxph" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347095 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4bdd961a-4364-4a42-b398-17570f149c42-signing-cabundle\") pod \"service-ca-9c57cc56f-zc8fl\" (UID: \"4bdd961a-4364-4a42-b398-17570f149c42\") " pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347098 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc347b57-7a13-480c-b630-f2486ce233fc-config-volume\") pod \"collect-profiles-29412000-vfb2p\" (UID: \"fc347b57-7a13-480c-b630-f2486ce233fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347237 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee2362a-613c-4927-9525-3d7f87167ab7-config\") pod \"machine-api-operator-5694c8668f-7bqms\" (UID: \"6ee2362a-613c-4927-9525-3d7f87167ab7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347093 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b917a512-4630-408e-9d9c-0ca6808d4a5b-trusted-ca\") pod \"console-operator-58897d9998-b8mqn\" (UID: \"b917a512-4630-408e-9d9c-0ca6808d4a5b\") " pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347631 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/748ff627-bc26-49ba-a0bd-f970f18f216f-config\") pod \"service-ca-operator-777779d784-vscpq\" (UID: \"748ff627-bc26-49ba-a0bd-f970f18f216f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347823 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws56k\" (UniqueName: \"kubernetes.io/projected/bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8-kube-api-access-ws56k\") pod \"cluster-samples-operator-665b6dd947-j64d2\" (UID: \"bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347862 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ksxr\" (UniqueName: \"kubernetes.io/projected/7f7cb9ef-f206-4b06-918f-1ac96967e618-kube-api-access-7ksxr\") pod \"olm-operator-6b444d44fb-nj6b5\" (UID: \"7f7cb9ef-f206-4b06-918f-1ac96967e618\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347897 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5580144-bfbe-490a-a448-2c225be80621-audit-dir\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.347976 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/abf21905-6e83-49e0-b238-dae340d0bcca-profile-collector-cert\") pod \"catalog-operator-68c6474976-ldphj\" (UID: \"abf21905-6e83-49e0-b238-dae340d0bcca\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348016 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2735423a-1c0a-489c-ada2-f5ba5aa58397-proxy-tls\") pod \"machine-config-operator-74547568cd-ldnmm\" (UID: \"2735423a-1c0a-489c-ada2-f5ba5aa58397\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348163 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/748ff627-bc26-49ba-a0bd-f970f18f216f-serving-cert\") pod \"service-ca-operator-777779d784-vscpq\" (UID: \"748ff627-bc26-49ba-a0bd-f970f18f216f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348193 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc347b57-7a13-480c-b630-f2486ce233fc-secret-volume\") pod \"collect-profiles-29412000-vfb2p\" (UID: \"fc347b57-7a13-480c-b630-f2486ce233fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348201 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c85f3795-cbc4-46fe-ba79-b68904df2de3-config\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348441 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b917a512-4630-408e-9d9c-0ca6808d4a5b-serving-cert\") pod \"console-operator-58897d9998-b8mqn\" (UID: \"b917a512-4630-408e-9d9c-0ca6808d4a5b\") " pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348697 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4bdd961a-4364-4a42-b398-17570f149c42-signing-key\") pod \"service-ca-9c57cc56f-zc8fl\" (UID: \"4bdd961a-4364-4a42-b398-17570f149c42\") " pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348784 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdr42\" (UniqueName: \"kubernetes.io/projected/513c0923-f361-46af-8761-b4d809c1b287-kube-api-access-kdr42\") pod 
\"machine-approver-56656f9798-5rk4f\" (UID: \"513c0923-f361-46af-8761-b4d809c1b287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348810 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-config\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348849 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb5b4076-512d-4f19-a773-36cd7a54a8a4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gwmbx\" (UID: \"cb5b4076-512d-4f19-a773-36cd7a54a8a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348900 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb5b4076-512d-4f19-a773-36cd7a54a8a4-config\") pod \"kube-apiserver-operator-766d6c64bb-gwmbx\" (UID: \"cb5b4076-512d-4f19-a773-36cd7a54a8a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348928 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb5b4076-512d-4f19-a773-36cd7a54a8a4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gwmbx\" (UID: \"cb5b4076-512d-4f19-a773-36cd7a54a8a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348954 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-config\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.348981 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-serving-cert\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.349059 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/513c0923-f361-46af-8761-b4d809c1b287-auth-proxy-config\") pod \"machine-approver-56656f9798-5rk4f\" (UID: \"513c0923-f361-46af-8761-b4d809c1b287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.349940 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c85f3795-cbc4-46fe-ba79-b68904df2de3-serving-cert\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:14 
crc kubenswrapper[4811]: I1203 00:08:14.349959 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/abf21905-6e83-49e0-b238-dae340d0bcca-srv-cert\") pod \"catalog-operator-68c6474976-ldphj\" (UID: \"abf21905-6e83-49e0-b238-dae340d0bcca\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.350707 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee2362a-613c-4927-9525-3d7f87167ab7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7bqms\" (UID: \"6ee2362a-613c-4927-9525-3d7f87167ab7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.351029 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9746028f-d836-467c-91d8-4530f09ac665-metrics-tls\") pod \"dns-operator-744455d44c-pstnw\" (UID: \"9746028f-d836-467c-91d8-4530f09ac665\") " pod="openshift-dns-operator/dns-operator-744455d44c-pstnw" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.351428 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb5b4076-512d-4f19-a773-36cd7a54a8a4-config\") pod \"kube-apiserver-operator-766d6c64bb-gwmbx\" (UID: \"cb5b4076-512d-4f19-a773-36cd7a54a8a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.351751 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-config\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.352816 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f4726ee-a716-44e3-a2d7-cfd634b1b476-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vzhhf\" (UID: \"6f4726ee-a716-44e3-a2d7-cfd634b1b476\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.352970 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc347b57-7a13-480c-b630-f2486ce233fc-secret-volume\") pod \"collect-profiles-29412000-vfb2p\" (UID: \"fc347b57-7a13-480c-b630-f2486ce233fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.353236 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.353487 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb5b4076-512d-4f19-a773-36cd7a54a8a4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gwmbx\" (UID: \"cb5b4076-512d-4f19-a773-36cd7a54a8a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 
00:08:14.353587 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b917a512-4630-408e-9d9c-0ca6808d4a5b-serving-cert\") pod \"console-operator-58897d9998-b8mqn\" (UID: \"b917a512-4630-408e-9d9c-0ca6808d4a5b\") " pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.354213 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j64d2\" (UID: \"bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.354921 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-serving-cert\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.356698 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a505c1c-0ab7-4920-b43e-6475fae9b32b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fbzcm\" (UID: \"4a505c1c-0ab7-4920-b43e-6475fae9b32b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.356779 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/abf21905-6e83-49e0-b238-dae340d0bcca-profile-collector-cert\") pod \"catalog-operator-68c6474976-ldphj\" (UID: \"abf21905-6e83-49e0-b238-dae340d0bcca\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.383450 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.394668 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.413430 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.433618 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.449944 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8dc919c-6856-4c26-a76d-3ba3212fe7c3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q87bq\" (UID: \"c8dc919c-6856-4c26-a76d-3ba3212fe7c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.449978 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5580144-bfbe-490a-a448-2c225be80621-encryption-config\") pod 
\"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.450027 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e538a4e7-dd56-48db-828e-49af39ac5def-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.450053 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8dc919c-6856-4c26-a76d-3ba3212fe7c3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q87bq\" (UID: \"c8dc919c-6856-4c26-a76d-3ba3212fe7c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.450224 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmw6z\" (UniqueName: \"kubernetes.io/projected/4bdd961a-4364-4a42-b398-17570f149c42-kube-api-access-bmw6z\") pod \"service-ca-9c57cc56f-zc8fl\" (UID: \"4bdd961a-4364-4a42-b398-17570f149c42\") " pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.450311 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513c0923-f361-46af-8761-b4d809c1b287-config\") pod \"machine-approver-56656f9798-5rk4f\" (UID: \"513c0923-f361-46af-8761-b4d809c1b287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.450474 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7f7cb9ef-f206-4b06-918f-1ac96967e618-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nj6b5\" (UID: \"7f7cb9ef-f206-4b06-918f-1ac96967e618\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.450527 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f7cb9ef-f206-4b06-918f-1ac96967e618-srv-cert\") pod \"olm-operator-6b444d44fb-nj6b5\" (UID: \"7f7cb9ef-f206-4b06-918f-1ac96967e618\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.450650 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4bdd961a-4364-4a42-b398-17570f149c42-signing-cabundle\") pod \"service-ca-9c57cc56f-zc8fl\" (UID: \"4bdd961a-4364-4a42-b398-17570f149c42\") " pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.450727 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8dc919c-6856-4c26-a76d-3ba3212fe7c3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q87bq\" (UID: \"c8dc919c-6856-4c26-a76d-3ba3212fe7c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" Dec 03 00:08:14 crc kubenswrapper[4811]: 
I1203 00:08:14.450767 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ksxr\" (UniqueName: \"kubernetes.io/projected/7f7cb9ef-f206-4b06-918f-1ac96967e618-kube-api-access-7ksxr\") pod \"olm-operator-6b444d44fb-nj6b5\" (UID: \"7f7cb9ef-f206-4b06-918f-1ac96967e618\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.450922 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5580144-bfbe-490a-a448-2c225be80621-audit-dir\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.450944 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2735423a-1c0a-489c-ada2-f5ba5aa58397-proxy-tls\") pod \"machine-config-operator-74547568cd-ldnmm\" (UID: \"2735423a-1c0a-489c-ada2-f5ba5aa58397\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.450990 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5580144-bfbe-490a-a448-2c225be80621-audit-dir\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451005 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4bdd961a-4364-4a42-b398-17570f149c42-signing-key\") pod \"service-ca-9c57cc56f-zc8fl\" (UID: \"4bdd961a-4364-4a42-b398-17570f149c42\") " pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451044 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdr42\" (UniqueName: \"kubernetes.io/projected/513c0923-f361-46af-8761-b4d809c1b287-kube-api-access-kdr42\") pod \"machine-approver-56656f9798-5rk4f\" (UID: \"513c0923-f361-46af-8761-b4d809c1b287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451071 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-config\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451183 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/513c0923-f361-46af-8761-b4d809c1b287-auth-proxy-config\") pod \"machine-approver-56656f9798-5rk4f\" (UID: \"513c0923-f361-46af-8761-b4d809c1b287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451206 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-image-import-ca\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " 
pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451314 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e538a4e7-dd56-48db-828e-49af39ac5def-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451341 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451369 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8dc919c-6856-4c26-a76d-3ba3212fe7c3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q87bq\" (UID: \"c8dc919c-6856-4c26-a76d-3ba3212fe7c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451388 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5580144-bfbe-490a-a448-2c225be80621-node-pullsecrets\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451405 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e538a4e7-dd56-48db-828e-49af39ac5def-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451421 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2735423a-1c0a-489c-ada2-f5ba5aa58397-images\") pod \"machine-config-operator-74547568cd-ldnmm\" (UID: \"2735423a-1c0a-489c-ada2-f5ba5aa58397\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451467 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5580144-bfbe-490a-a448-2c225be80621-node-pullsecrets\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451441 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zj8w\" (UniqueName: \"kubernetes.io/projected/2735423a-1c0a-489c-ada2-f5ba5aa58397-kube-api-access-9zj8w\") pod \"machine-config-operator-74547568cd-ldnmm\" (UID: \"2735423a-1c0a-489c-ada2-f5ba5aa58397\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.451605 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/72e13ac0-ed91-41a8-8df4-ca88a2838fd3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s6jkt\" (UID: \"72e13ac0-ed91-41a8-8df4-ca88a2838fd3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.452140 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5580144-bfbe-490a-a448-2c225be80621-serving-cert\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.452164 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-etcd-serving-ca\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.452184 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2735423a-1c0a-489c-ada2-f5ba5aa58397-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ldnmm\" (UID: \"2735423a-1c0a-489c-ada2-f5ba5aa58397\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.452206 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-audit\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.452227 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzmzm\" (UniqueName: \"kubernetes.io/projected/d5580144-bfbe-490a-a448-2c225be80621-kube-api-access-kzmzm\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.452271 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5flqn\" (UniqueName: \"kubernetes.io/projected/72e13ac0-ed91-41a8-8df4-ca88a2838fd3-kube-api-access-5flqn\") pod \"control-plane-machine-set-operator-78cbb6b69f-s6jkt\" (UID: \"72e13ac0-ed91-41a8-8df4-ca88a2838fd3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.452316 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/513c0923-f361-46af-8761-b4d809c1b287-machine-approver-tls\") pod \"machine-approver-56656f9798-5rk4f\" (UID: \"513c0923-f361-46af-8761-b4d809c1b287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.452355 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsnqj\" (UniqueName: 
\"kubernetes.io/projected/e538a4e7-dd56-48db-828e-49af39ac5def-kube-api-access-wsnqj\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.452378 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5580144-bfbe-490a-a448-2c225be80621-etcd-client\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.452533 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2735423a-1c0a-489c-ada2-f5ba5aa58397-images\") pod \"machine-config-operator-74547568cd-ldnmm\" (UID: \"2735423a-1c0a-489c-ada2-f5ba5aa58397\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.452826 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2735423a-1c0a-489c-ada2-f5ba5aa58397-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ldnmm\" (UID: \"2735423a-1c0a-489c-ada2-f5ba5aa58397\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.453675 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.454131 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2735423a-1c0a-489c-ada2-f5ba5aa58397-proxy-tls\") pod \"machine-config-operator-74547568cd-ldnmm\" (UID: \"2735423a-1c0a-489c-ada2-f5ba5aa58397\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.454526 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8dc919c-6856-4c26-a76d-3ba3212fe7c3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q87bq\" (UID: \"c8dc919c-6856-4c26-a76d-3ba3212fe7c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.456295 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7f7cb9ef-f206-4b06-918f-1ac96967e618-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nj6b5\" (UID: \"7f7cb9ef-f206-4b06-918f-1ac96967e618\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.456943 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/72e13ac0-ed91-41a8-8df4-ca88a2838fd3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s6jkt\" (UID: \"72e13ac0-ed91-41a8-8df4-ca88a2838fd3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.493099 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tmmq6\" (UniqueName: \"kubernetes.io/projected/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-kube-api-access-tmmq6\") pod \"route-controller-manager-6576b87f9c-pdrt9\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.513288 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md6p8\" (UniqueName: \"kubernetes.io/projected/dbc0f41e-4c40-45ef-9847-d1516558f118-kube-api-access-md6p8\") pod \"machine-config-controller-84d6567774-sstkc\" (UID: \"dbc0f41e-4c40-45ef-9847-d1516558f118\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.529872 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvvth\" (UniqueName: \"kubernetes.io/projected/06add20d-ca83-4ab5-8e5d-3238e99535df-kube-api-access-rvvth\") pod \"packageserver-d55dfcdfc-r897j\" (UID: \"06add20d-ca83-4ab5-8e5d-3238e99535df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.550500 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pcr7\" (UniqueName: \"kubernetes.io/projected/dc0deee7-632a-406e-9ecd-b3adcea8f557-kube-api-access-6pcr7\") pod \"etcd-operator-b45778765-vjx2n\" (UID: \"dc0deee7-632a-406e-9ecd-b3adcea8f557\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.571410 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4ctj\" (UniqueName: \"kubernetes.io/projected/68715556-bc95-4811-8996-d338241741e8-kube-api-access-r4ctj\") pod \"openshift-apiserver-operator-796bbdcf4f-w974d\" (UID: \"68715556-bc95-4811-8996-d338241741e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.591713 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9djtz\" (UniqueName: \"kubernetes.io/projected/b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b-kube-api-access-9djtz\") pod \"kube-storage-version-migrator-operator-b67b599dd-78j8h\" (UID: \"b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.593929 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.606402 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/513c0923-f361-46af-8761-b4d809c1b287-machine-approver-tls\") pod \"machine-approver-56656f9798-5rk4f\" (UID: \"513c0923-f361-46af-8761-b4d809c1b287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.613874 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.633907 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.651687 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.653448 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.674793 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.682520 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/513c0923-f361-46af-8761-b4d809c1b287-auth-proxy-config\") pod \"machine-approver-56656f9798-5rk4f\" (UID: \"513c0923-f361-46af-8761-b4d809c1b287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.684673 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.696736 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.701964 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513c0923-f361-46af-8761-b4d809c1b287-config\") pod \"machine-approver-56656f9798-5rk4f\" (UID: \"513c0923-f361-46af-8761-b4d809c1b287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.703401 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.713694 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.719076 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.734644 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.737763 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.744513 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.753589 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.776098 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.794439 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.811195 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5580144-bfbe-490a-a448-2c225be80621-serving-cert\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.821533 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.833936 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.851634 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5580144-bfbe-490a-a448-2c225be80621-etcd-client\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.854018 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.865761 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5580144-bfbe-490a-a448-2c225be80621-encryption-config\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.873215 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.874832 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9"] Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.884353 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-etcd-serving-ca\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.893822 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.913836 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.924130 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-audit\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.934208 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.942586 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-config\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.953125 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.962994 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-image-import-ca\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.995352 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 00:08:14 crc kubenswrapper[4811]: I1203 00:08:14.995893 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.004900 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5580144-bfbe-490a-a448-2c225be80621-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.014755 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.033564 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.045321 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4bdd961a-4364-4a42-b398-17570f149c42-signing-key\") pod \"service-ca-9c57cc56f-zc8fl\" (UID: \"4bdd961a-4364-4a42-b398-17570f149c42\") " pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.053922 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.076373 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.082383 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4bdd961a-4364-4a42-b398-17570f149c42-signing-cabundle\") pod \"service-ca-9c57cc56f-zc8fl\" (UID: 
\"4bdd961a-4364-4a42-b398-17570f149c42\") " pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.088050 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vjx2n"] Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.092854 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.114044 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.137080 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.158526 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.173997 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.193767 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.213523 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.234069 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.253640 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.278417 4811 request.go:700] Waited for 1.008677796s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&limit=500&resourceVersion=0 Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.282102 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.294039 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.297504 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d"] Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.316576 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 03 00:08:15 crc kubenswrapper[4811]: W1203 00:08:15.323411 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68715556_bc95_4811_8996_d338241741e8.slice/crio-b0931e0943379580756b22a2d80e5041bb3e4591bdf567017d39b5f0acd798bc WatchSource:0}: Error finding container b0931e0943379580756b22a2d80e5041bb3e4591bdf567017d39b5f0acd798bc: Status 404 returned error can't find the container with id b0931e0943379580756b22a2d80e5041bb3e4591bdf567017d39b5f0acd798bc Dec 03 00:08:15 
crc kubenswrapper[4811]: I1203 00:08:15.333245 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.335674 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h"] Dec 03 00:08:15 crc kubenswrapper[4811]: W1203 00:08:15.349497 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1340bea_ba1f_4a63_a4dd_ccc4534f7c0b.slice/crio-0bc2cbcaf3df9fb9586c6198ab08a6c304b12561d31d51bcac4e274b938129cc WatchSource:0}: Error finding container 0bc2cbcaf3df9fb9586c6198ab08a6c304b12561d31d51bcac4e274b938129cc: Status 404 returned error can't find the container with id 0bc2cbcaf3df9fb9586c6198ab08a6c304b12561d31d51bcac4e274b938129cc Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.351484 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j"] Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.359101 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.369210 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc"] Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.373970 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 00:08:15 crc kubenswrapper[4811]: W1203 00:08:15.379435 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06add20d_ca83_4ab5_8e5d_3238e99535df.slice/crio-141e3be54ad8b8bd8813fdf1b2f812ae28abd49d5dc1d180a144b9fdc9dd1ad1 WatchSource:0}: Error finding container 141e3be54ad8b8bd8813fdf1b2f812ae28abd49d5dc1d180a144b9fdc9dd1ad1: Status 404 returned error can't find the container with id 141e3be54ad8b8bd8813fdf1b2f812ae28abd49d5dc1d180a144b9fdc9dd1ad1 Dec 03 00:08:15 crc kubenswrapper[4811]: W1203 00:08:15.380308 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbc0f41e_4c40_45ef_9847_d1516558f118.slice/crio-047210ad82ed410f25ea46caf0f8cba784e1a50230ea874e19c1bb93d7665089 WatchSource:0}: Error finding container 047210ad82ed410f25ea46caf0f8cba784e1a50230ea874e19c1bb93d7665089: Status 404 returned error can't find the container with id 047210ad82ed410f25ea46caf0f8cba784e1a50230ea874e19c1bb93d7665089 Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.409580 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.413892 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.433480 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 00:08:15 crc kubenswrapper[4811]: E1203 00:08:15.450335 4811 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for 
the condition Dec 03 00:08:15 crc kubenswrapper[4811]: E1203 00:08:15.450468 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e538a4e7-dd56-48db-828e-49af39ac5def-trusted-ca podName:e538a4e7-dd56-48db-828e-49af39ac5def nodeName:}" failed. No retries permitted until 2025-12-03 00:08:15.950422321 +0000 UTC m=+136.092251803 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/e538a4e7-dd56-48db-828e-49af39ac5def-trusted-ca") pod "cluster-image-registry-operator-dc59b4c8b-x87s8" (UID: "e538a4e7-dd56-48db-828e-49af39ac5def") : failed to sync configmap cache: timed out waiting for the condition Dec 03 00:08:15 crc kubenswrapper[4811]: E1203 00:08:15.451060 4811 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 03 00:08:15 crc kubenswrapper[4811]: E1203 00:08:15.451114 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f7cb9ef-f206-4b06-918f-1ac96967e618-srv-cert podName:7f7cb9ef-f206-4b06-918f-1ac96967e618 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:15.951096598 +0000 UTC m=+136.092926071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7f7cb9ef-f206-4b06-918f-1ac96967e618-srv-cert") pod "olm-operator-6b444d44fb-nj6b5" (UID: "7f7cb9ef-f206-4b06-918f-1ac96967e618") : failed to sync secret cache: timed out waiting for the condition Dec 03 00:08:15 crc kubenswrapper[4811]: E1203 00:08:15.452391 4811 secret.go:188] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Dec 03 00:08:15 crc kubenswrapper[4811]: E1203 00:08:15.452459 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e538a4e7-dd56-48db-828e-49af39ac5def-image-registry-operator-tls podName:e538a4e7-dd56-48db-828e-49af39ac5def nodeName:}" failed. No retries permitted until 2025-12-03 00:08:15.952426893 +0000 UTC m=+136.094256365 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/e538a4e7-dd56-48db-828e-49af39ac5def-image-registry-operator-tls") pod "cluster-image-registry-operator-dc59b4c8b-x87s8" (UID: "e538a4e7-dd56-48db-828e-49af39ac5def") : failed to sync secret cache: timed out waiting for the condition Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.458220 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.474549 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.493641 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.514340 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.534502 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.553615 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.585157 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.592589 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.614433 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.633723 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.653396 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.679135 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.693616 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.714170 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.734332 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.754556 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.773966 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.799842 4811 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.814009 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.836873 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.852944 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.874216 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.893592 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.904011 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" event={"ID":"dc0deee7-632a-406e-9ecd-b3adcea8f557","Type":"ContainerStarted","Data":"33ddc37d43998a2a39fedc41c2c40d6182346fc5ac9f25c8c571f9c2214950cc"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.904061 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" event={"ID":"dc0deee7-632a-406e-9ecd-b3adcea8f557","Type":"ContainerStarted","Data":"4264c0a7460547402f0c4197352eb373f9a2dc7193e1c0c7ff4cc2f7036ea908"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.905678 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" event={"ID":"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7","Type":"ContainerStarted","Data":"ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.905750 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" event={"ID":"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7","Type":"ContainerStarted","Data":"b065da9920d8f7e2737b587aec2a9e30f878f5dc93d6197e817c971b1011808f"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.905962 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.907009 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" event={"ID":"dbc0f41e-4c40-45ef-9847-d1516558f118","Type":"ContainerStarted","Data":"4ddd25e7ed206dcfeeb379fbd1043bc9ef106aac83346a529c3f8824976e1b3c"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.907060 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" event={"ID":"dbc0f41e-4c40-45ef-9847-d1516558f118","Type":"ContainerStarted","Data":"1d10293f7717319df05b8f1f4dca6329d01cb456af59744315fc6b1594732227"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.907075 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" 
event={"ID":"dbc0f41e-4c40-45ef-9847-d1516558f118","Type":"ContainerStarted","Data":"047210ad82ed410f25ea46caf0f8cba784e1a50230ea874e19c1bb93d7665089"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.907998 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" event={"ID":"b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b","Type":"ContainerStarted","Data":"4373b7dfdf493ebfe91096dcecbd989ef78f0f9ed144219ff210d11b50958609"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.908029 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" event={"ID":"b1340bea-ba1f-4a63-a4dd-ccc4534f7c0b","Type":"ContainerStarted","Data":"0bc2cbcaf3df9fb9586c6198ab08a6c304b12561d31d51bcac4e274b938129cc"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.909218 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" event={"ID":"68715556-bc95-4811-8996-d338241741e8","Type":"ContainerStarted","Data":"5ce658fb55a8cd6c50793856a112f31dd672601afceca2e9afd275af03d18905"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.909244 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" event={"ID":"68715556-bc95-4811-8996-d338241741e8","Type":"ContainerStarted","Data":"b0931e0943379580756b22a2d80e5041bb3e4591bdf567017d39b5f0acd798bc"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.911209 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" event={"ID":"06add20d-ca83-4ab5-8e5d-3238e99535df","Type":"ContainerStarted","Data":"6112e97b199fe9c3f07f025d78cb79aefe03b02f18a9acb097c42d2fb9dcbb93"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.911237 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" event={"ID":"06add20d-ca83-4ab5-8e5d-3238e99535df","Type":"ContainerStarted","Data":"141e3be54ad8b8bd8813fdf1b2f812ae28abd49d5dc1d180a144b9fdc9dd1ad1"} Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.912005 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.912293 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.913386 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.934525 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.953469 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.974982 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e538a4e7-dd56-48db-828e-49af39ac5def-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.975114 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f7cb9ef-f206-4b06-918f-1ac96967e618-srv-cert\") pod \"olm-operator-6b444d44fb-nj6b5\" (UID: \"7f7cb9ef-f206-4b06-918f-1ac96967e618\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.975207 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e538a4e7-dd56-48db-828e-49af39ac5def-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.977642 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.977796 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e538a4e7-dd56-48db-828e-49af39ac5def-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.983351 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f7cb9ef-f206-4b06-918f-1ac96967e618-srv-cert\") pod \"olm-operator-6b444d44fb-nj6b5\" (UID: \"7f7cb9ef-f206-4b06-918f-1ac96967e618\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:15 crc kubenswrapper[4811]: I1203 00:08:15.993682 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.000869 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e538a4e7-dd56-48db-828e-49af39ac5def-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.033788 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.053493 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.072751 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.093429 4811 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.113295 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.134693 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.177619 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.193502 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.213926 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.234038 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.253953 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.273566 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.291749 4811 request.go:700] Waited for 1.965660639s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.298335 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.344437 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bnng\" (UniqueName: \"kubernetes.io/projected/4a68baef-4258-4aea-b775-172682cbf844-kube-api-access-5bnng\") pod \"migrator-59844c95c7-wpzs6\" (UID: \"4a68baef-4258-4aea-b775-172682cbf844\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wpzs6" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.383108 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl54d\" (UniqueName: \"kubernetes.io/projected/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-kube-api-access-pl54d\") pod \"controller-manager-879f6c89f-2mbnl\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.394550 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szd9z\" (UniqueName: \"kubernetes.io/projected/c85f3795-cbc4-46fe-ba79-b68904df2de3-kube-api-access-szd9z\") pod \"authentication-operator-69f744f599-rhl4d\" (UID: \"c85f3795-cbc4-46fe-ba79-b68904df2de3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.396578 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.401810 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f4726ee-a716-44e3-a2d7-cfd634b1b476-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vzhhf\" (UID: \"6f4726ee-a716-44e3-a2d7-cfd634b1b476\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.407032 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.426576 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsxp9\" (UniqueName: \"kubernetes.io/projected/6ee2362a-613c-4927-9525-3d7f87167ab7-kube-api-access-fsxp9\") pod \"machine-api-operator-5694c8668f-7bqms\" (UID: \"6ee2362a-613c-4927-9525-3d7f87167ab7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.429201 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.449483 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wllng\" (UniqueName: \"kubernetes.io/projected/9746028f-d836-467c-91d8-4530f09ac665-kube-api-access-wllng\") pod \"dns-operator-744455d44c-pstnw\" (UID: \"9746028f-d836-467c-91d8-4530f09ac665\") " pod="openshift-dns-operator/dns-operator-744455d44c-pstnw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.461070 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcntv\" (UniqueName: \"kubernetes.io/projected/4a505c1c-0ab7-4920-b43e-6475fae9b32b-kube-api-access-bcntv\") pod \"package-server-manager-789f6589d5-fbzcm\" (UID: \"4a505c1c-0ab7-4920-b43e-6475fae9b32b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.479736 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtfn8\" (UniqueName: \"kubernetes.io/projected/748ff627-bc26-49ba-a0bd-f970f18f216f-kube-api-access-qtfn8\") pod \"service-ca-operator-777779d784-vscpq\" (UID: \"748ff627-bc26-49ba-a0bd-f970f18f216f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.490162 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gc42\" (UniqueName: \"kubernetes.io/projected/b917a512-4630-408e-9d9c-0ca6808d4a5b-kube-api-access-2gc42\") pod \"console-operator-58897d9998-b8mqn\" (UID: \"b917a512-4630-408e-9d9c-0ca6808d4a5b\") " pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.491047 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.508890 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpr5q\" (UniqueName: \"kubernetes.io/projected/abf21905-6e83-49e0-b238-dae340d0bcca-kube-api-access-rpr5q\") pod \"catalog-operator-68c6474976-ldphj\" (UID: \"abf21905-6e83-49e0-b238-dae340d0bcca\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.538458 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqxgq\" (UniqueName: \"kubernetes.io/projected/fc347b57-7a13-480c-b630-f2486ce233fc-kube-api-access-xqxgq\") pod \"collect-profiles-29412000-vfb2p\" (UID: \"fc347b57-7a13-480c-b630-f2486ce233fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.572639 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.587816 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.597826 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlb88\" (UniqueName: \"kubernetes.io/projected/6531f918-708c-4bb8-a418-d09dfb7a8b3a-kube-api-access-jlb88\") pod \"downloads-7954f5f757-5j74h\" (UID: \"6531f918-708c-4bb8-a418-d09dfb7a8b3a\") " pod="openshift-console/downloads-7954f5f757-5j74h" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.601562 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws56k\" (UniqueName: \"kubernetes.io/projected/bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8-kube-api-access-ws56k\") pod \"cluster-samples-operator-665b6dd947-j64d2\" (UID: \"bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.601788 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8b8z\" (UniqueName: \"kubernetes.io/projected/ac1cc52b-99ba-456c-8ffe-cebbc7d4e827-kube-api-access-k8b8z\") pod \"image-pruner-29412000-7zxph\" (UID: \"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827\") " pod="openshift-image-registry/image-pruner-29412000-7zxph" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.618485 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb5b4076-512d-4f19-a773-36cd7a54a8a4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gwmbx\" (UID: \"cb5b4076-512d-4f19-a773-36cd7a54a8a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.622405 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.631171 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wpzs6" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.634804 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8dc919c-6856-4c26-a76d-3ba3212fe7c3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q87bq\" (UID: \"c8dc919c-6856-4c26-a76d-3ba3212fe7c3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.643426 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.658783 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmw6z\" (UniqueName: \"kubernetes.io/projected/4bdd961a-4364-4a42-b398-17570f149c42-kube-api-access-bmw6z\") pod \"service-ca-9c57cc56f-zc8fl\" (UID: \"4bdd961a-4364-4a42-b398-17570f149c42\") " pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.671713 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ksxr\" (UniqueName: \"kubernetes.io/projected/7f7cb9ef-f206-4b06-918f-1ac96967e618-kube-api-access-7ksxr\") pod \"olm-operator-6b444d44fb-nj6b5\" (UID: \"7f7cb9ef-f206-4b06-918f-1ac96967e618\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.680677 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.691516 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdr42\" (UniqueName: \"kubernetes.io/projected/513c0923-f361-46af-8761-b4d809c1b287-kube-api-access-kdr42\") pod \"machine-approver-56656f9798-5rk4f\" (UID: \"513c0923-f361-46af-8761-b4d809c1b287\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.714373 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5j74h" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.716015 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e538a4e7-dd56-48db-828e-49af39ac5def-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.722058 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pstnw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.729830 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rhl4d"] Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.736967 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5flqn\" (UniqueName: \"kubernetes.io/projected/72e13ac0-ed91-41a8-8df4-ca88a2838fd3-kube-api-access-5flqn\") pod \"control-plane-machine-set-operator-78cbb6b69f-s6jkt\" (UID: \"72e13ac0-ed91-41a8-8df4-ca88a2838fd3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.737297 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.751951 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.757565 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzmzm\" (UniqueName: \"kubernetes.io/projected/d5580144-bfbe-490a-a448-2c225be80621-kube-api-access-kzmzm\") pod \"apiserver-76f77b778f-wbv94\" (UID: \"d5580144-bfbe-490a-a448-2c225be80621\") " pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.772548 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsnqj\" (UniqueName: \"kubernetes.io/projected/e538a4e7-dd56-48db-828e-49af39ac5def-kube-api-access-wsnqj\") pod \"cluster-image-registry-operator-dc59b4c8b-x87s8\" (UID: \"e538a4e7-dd56-48db-828e-49af39ac5def\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:16 crc kubenswrapper[4811]: W1203 00:08:16.791965 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc85f3795_cbc4_46fe_ba79_b68904df2de3.slice/crio-84950bf9b997ef57dc88b1b2081d4d67f5d325b138e47d14a37f89c9935e082f WatchSource:0}: Error finding container 84950bf9b997ef57dc88b1b2081d4d67f5d325b138e47d14a37f89c9935e082f: Status 404 returned error can't find the container with id 84950bf9b997ef57dc88b1b2081d4d67f5d325b138e47d14a37f89c9935e082f Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.792428 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zj8w\" (UniqueName: \"kubernetes.io/projected/2735423a-1c0a-489c-ada2-f5ba5aa58397-kube-api-access-9zj8w\") pod \"machine-config-operator-74547568cd-ldnmm\" (UID: \"2735423a-1c0a-489c-ada2-f5ba5aa58397\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.793406 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29412000-7zxph" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.813087 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.817320 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7bqms"] Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.832185 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.852037 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.893382 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.894428 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896467 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-trusted-ca-bundle\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896518 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896542 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84658e5e-5af1-49b0-a45a-f7aa7a852a98-metrics-certs\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896561 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gvsv\" (UniqueName: \"kubernetes.io/projected/84658e5e-5af1-49b0-a45a-f7aa7a852a98-kube-api-access-7gvsv\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896580 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-registry-tls\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896601 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/71d18873-465e-4bc9-aca1-149975060eff-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896621 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896709 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896729 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8vbw\" (UniqueName: \"kubernetes.io/projected/0a683a3a-18b1-4377-9a7a-a00baeed9bc8-kube-api-access-p8vbw\") pod \"ingress-operator-5b745b69d9-nscpq\" (UID: \"0a683a3a-18b1-4377-9a7a-a00baeed9bc8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896750 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896770 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896802 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9qzh\" (UniqueName: \"kubernetes.io/projected/5f915f72-36dd-40f3-a47c-7245505bf997-kube-api-access-m9qzh\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896819 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f915f72-36dd-40f3-a47c-7245505bf997-audit-dir\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896854 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/87d9c733-9e98-470e-9c8d-cb8b1817784c-serving-cert\") pod \"openshift-config-operator-7777fb866f-chklk\" (UID: \"87d9c733-9e98-470e-9c8d-cb8b1817784c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896873 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a683a3a-18b1-4377-9a7a-a00baeed9bc8-trusted-ca\") pod \"ingress-operator-5b745b69d9-nscpq\" (UID: \"0a683a3a-18b1-4377-9a7a-a00baeed9bc8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896892 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-console-oauth-config\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896912 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-audit-policies\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896930 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwbt4\" (UniqueName: \"kubernetes.io/projected/97cfb89f-0902-42fd-9e4d-a1459f2f2511-kube-api-access-wwbt4\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896970 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a683a3a-18b1-4377-9a7a-a00baeed9bc8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nscpq\" (UID: \"0a683a3a-18b1-4377-9a7a-a00baeed9bc8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.896999 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-console-serving-cert\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.897028 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901a390d-2893-422d-ac24-660162a0cc6e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7fqbx\" (UID: \"901a390d-2893-422d-ac24-660162a0cc6e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.897048 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/5f915f72-36dd-40f3-a47c-7245505bf997-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.897068 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-956mn\" (UID: \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\") " pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.897089 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-956mn\" (UID: \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\") " pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.897119 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.897136 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5f915f72-36dd-40f3-a47c-7245505bf997-audit-policies\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.897176 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/71d18873-465e-4bc9-aca1-149975060eff-registry-certificates\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.897196 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/87d9c733-9e98-470e-9c8d-cb8b1817784c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-chklk\" (UID: \"87d9c733-9e98-470e-9c8d-cb8b1817784c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.897216 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tswgx\" (UniqueName: \"kubernetes.io/projected/901a390d-2893-422d-ac24-660162a0cc6e-kube-api-access-tswgx\") pod \"openshift-controller-manager-operator-756b6f6bc6-7fqbx\" (UID: \"901a390d-2893-422d-ac24-660162a0cc6e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.897234 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f915f72-36dd-40f3-a47c-7245505bf997-etcd-client\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.897285 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.897968 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-console-config\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898030 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898086 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-bound-sa-token\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898117 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71d18873-465e-4bc9-aca1-149975060eff-trusted-ca\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898139 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9d15a09-7646-4c97-ba16-3a1fd2d212ab-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-22wtr\" (UID: \"d9d15a09-7646-4c97-ba16-3a1fd2d212ab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-22wtr" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898195 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/84658e5e-5af1-49b0-a45a-f7aa7a852a98-default-certificate\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898214 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/5f915f72-36dd-40f3-a47c-7245505bf997-encryption-config\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898320 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898344 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97cfb89f-0902-42fd-9e4d-a1459f2f2511-audit-dir\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898365 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898603 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898673 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898693 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdp4n\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-kube-api-access-xdp4n\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898740 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvrdw\" (UniqueName: \"kubernetes.io/projected/87d9c733-9e98-470e-9c8d-cb8b1817784c-kube-api-access-pvrdw\") pod \"openshift-config-operator-7777fb866f-chklk\" (UID: \"87d9c733-9e98-470e-9c8d-cb8b1817784c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898785 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84658e5e-5af1-49b0-a45a-f7aa7a852a98-service-ca-bundle\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898804 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f915f72-36dd-40f3-a47c-7245505bf997-serving-cert\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898840 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/71d18873-465e-4bc9-aca1-149975060eff-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898870 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xgmp\" (UniqueName: \"kubernetes.io/projected/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-kube-api-access-8xgmp\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898891 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/901a390d-2893-422d-ac24-660162a0cc6e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7fqbx\" (UID: \"901a390d-2893-422d-ac24-660162a0cc6e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898909 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-oauth-serving-cert\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898926 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f915f72-36dd-40f3-a47c-7245505bf997-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.898964 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmfbq\" (UniqueName: \"kubernetes.io/projected/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-kube-api-access-mmfbq\") pod \"marketplace-operator-79b997595-956mn\" (UID: \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\") " pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.908249 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/0a683a3a-18b1-4377-9a7a-a00baeed9bc8-metrics-tls\") pod \"ingress-operator-5b745b69d9-nscpq\" (UID: \"0a683a3a-18b1-4377-9a7a-a00baeed9bc8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.908996 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r99b\" (UniqueName: \"kubernetes.io/projected/d9d15a09-7646-4c97-ba16-3a1fd2d212ab-kube-api-access-6r99b\") pod \"multus-admission-controller-857f4d67dd-22wtr\" (UID: \"d9d15a09-7646-4c97-ba16-3a1fd2d212ab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-22wtr" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.909040 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/84658e5e-5af1-49b0-a45a-f7aa7a852a98-stats-auth\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.909092 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-service-ca\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:16 crc kubenswrapper[4811]: E1203 00:08:16.910501 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:17.410476994 +0000 UTC m=+137.552306466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.918472 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.937453 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.965279 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p"] Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.965668 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.978514 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" event={"ID":"6ee2362a-613c-4927-9525-3d7f87167ab7","Type":"ContainerStarted","Data":"ae62b0b212c9acc66e2f70fa6198690ae1bbfc4fe7bab82e4559b5ebe50effbf"} Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.982558 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" Dec 03 00:08:16 crc kubenswrapper[4811]: I1203 00:08:16.991516 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" event={"ID":"c85f3795-cbc4-46fe-ba79-b68904df2de3","Type":"ContainerStarted","Data":"84950bf9b997ef57dc88b1b2081d4d67f5d325b138e47d14a37f89c9935e082f"} Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.010445 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011278 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84658e5e-5af1-49b0-a45a-f7aa7a852a98-service-ca-bundle\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011311 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f915f72-36dd-40f3-a47c-7245505bf997-serving-cert\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011343 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/71d18873-465e-4bc9-aca1-149975060eff-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011373 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-csi-data-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011399 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xgmp\" (UniqueName: \"kubernetes.io/projected/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-kube-api-access-8xgmp\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011443 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/901a390d-2893-422d-ac24-660162a0cc6e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7fqbx\" (UID: \"901a390d-2893-422d-ac24-660162a0cc6e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011462 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-oauth-serving-cert\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011479 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f915f72-36dd-40f3-a47c-7245505bf997-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011500 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-plugins-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011538 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmfbq\" (UniqueName: \"kubernetes.io/projected/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-kube-api-access-mmfbq\") pod \"marketplace-operator-79b997595-956mn\" (UID: \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\") " pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011556 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a683a3a-18b1-4377-9a7a-a00baeed9bc8-metrics-tls\") pod \"ingress-operator-5b745b69d9-nscpq\" (UID: \"0a683a3a-18b1-4377-9a7a-a00baeed9bc8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011591 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r99b\" (UniqueName: \"kubernetes.io/projected/d9d15a09-7646-4c97-ba16-3a1fd2d212ab-kube-api-access-6r99b\") pod \"multus-admission-controller-857f4d67dd-22wtr\" (UID: \"d9d15a09-7646-4c97-ba16-3a1fd2d212ab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-22wtr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011628 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/84658e5e-5af1-49b0-a45a-f7aa7a852a98-stats-auth\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011665 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-service-ca\") pod \"console-f9d7485db-9jgcw\" (UID: 
\"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011713 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-trusted-ca-bundle\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011745 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011762 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84658e5e-5af1-49b0-a45a-f7aa7a852a98-metrics-certs\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011780 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gvsv\" (UniqueName: \"kubernetes.io/projected/84658e5e-5af1-49b0-a45a-f7aa7a852a98-kube-api-access-7gvsv\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011801 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b4hm\" (UniqueName: \"kubernetes.io/projected/2393a548-e232-4d77-bbc2-f3daa72338c4-kube-api-access-8b4hm\") pod \"dns-default-cdlqr\" (UID: \"2393a548-e232-4d77-bbc2-f3daa72338c4\") " pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011833 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-registry-tls\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011854 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/71d18873-465e-4bc9-aca1-149975060eff-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011873 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqc5k\" (UniqueName: \"kubernetes.io/projected/d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf-kube-api-access-zqc5k\") pod \"machine-config-server-8447f\" (UID: \"d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf\") " pod="openshift-machine-config-operator/machine-config-server-8447f" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011895 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.011938 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2cww\" (UniqueName: \"kubernetes.io/projected/b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0-kube-api-access-n2cww\") pod \"ingress-canary-ssdgl\" (UID: \"b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0\") " pod="openshift-ingress-canary/ingress-canary-ssdgl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012010 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012032 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8vbw\" (UniqueName: \"kubernetes.io/projected/0a683a3a-18b1-4377-9a7a-a00baeed9bc8-kube-api-access-p8vbw\") pod \"ingress-operator-5b745b69d9-nscpq\" (UID: \"0a683a3a-18b1-4377-9a7a-a00baeed9bc8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012056 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012085 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5llrs\" (UniqueName: \"kubernetes.io/projected/ac601043-ae94-4202-a446-ed478524075b-kube-api-access-5llrs\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012119 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012148 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9qzh\" (UniqueName: \"kubernetes.io/projected/5f915f72-36dd-40f3-a47c-7245505bf997-kube-api-access-m9qzh\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012169 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/5f915f72-36dd-40f3-a47c-7245505bf997-audit-dir\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012202 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87d9c733-9e98-470e-9c8d-cb8b1817784c-serving-cert\") pod \"openshift-config-operator-7777fb866f-chklk\" (UID: \"87d9c733-9e98-470e-9c8d-cb8b1817784c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012219 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a683a3a-18b1-4377-9a7a-a00baeed9bc8-trusted-ca\") pod \"ingress-operator-5b745b69d9-nscpq\" (UID: \"0a683a3a-18b1-4377-9a7a-a00baeed9bc8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012238 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-console-oauth-config\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012353 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-audit-policies\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012375 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwbt4\" (UniqueName: \"kubernetes.io/projected/97cfb89f-0902-42fd-9e4d-a1459f2f2511-kube-api-access-wwbt4\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012412 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a683a3a-18b1-4377-9a7a-a00baeed9bc8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nscpq\" (UID: \"0a683a3a-18b1-4377-9a7a-a00baeed9bc8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012445 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-console-serving-cert\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012467 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2393a548-e232-4d77-bbc2-f3daa72338c4-metrics-tls\") pod \"dns-default-cdlqr\" (UID: \"2393a548-e232-4d77-bbc2-f3daa72338c4\") " pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012484 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf-node-bootstrap-token\") pod \"machine-config-server-8447f\" (UID: \"d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf\") " pod="openshift-machine-config-operator/machine-config-server-8447f" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012523 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901a390d-2893-422d-ac24-660162a0cc6e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7fqbx\" (UID: \"901a390d-2893-422d-ac24-660162a0cc6e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012539 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5f915f72-36dd-40f3-a47c-7245505bf997-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012572 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-956mn\" (UID: \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\") " pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012591 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-956mn\" (UID: \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\") " pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012633 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012649 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5f915f72-36dd-40f3-a47c-7245505bf997-audit-policies\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012678 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf-certs\") pod \"machine-config-server-8447f\" (UID: \"d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf\") " pod="openshift-machine-config-operator/machine-config-server-8447f" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012703 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0-cert\") pod \"ingress-canary-ssdgl\" (UID: \"b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0\") " pod="openshift-ingress-canary/ingress-canary-ssdgl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012718 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-socket-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012736 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/71d18873-465e-4bc9-aca1-149975060eff-registry-certificates\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012753 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/87d9c733-9e98-470e-9c8d-cb8b1817784c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-chklk\" (UID: \"87d9c733-9e98-470e-9c8d-cb8b1817784c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012770 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tswgx\" (UniqueName: \"kubernetes.io/projected/901a390d-2893-422d-ac24-660162a0cc6e-kube-api-access-tswgx\") pod \"openshift-controller-manager-operator-756b6f6bc6-7fqbx\" (UID: \"901a390d-2893-422d-ac24-660162a0cc6e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012804 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f915f72-36dd-40f3-a47c-7245505bf997-etcd-client\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012824 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012843 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-console-config\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012860 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-mountpoint-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: 
\"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012887 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012904 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-bound-sa-token\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012922 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71d18873-465e-4bc9-aca1-149975060eff-trusted-ca\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.012940 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9d15a09-7646-4c97-ba16-3a1fd2d212ab-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-22wtr\" (UID: \"d9d15a09-7646-4c97-ba16-3a1fd2d212ab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-22wtr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.013009 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/84658e5e-5af1-49b0-a45a-f7aa7a852a98-default-certificate\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.013026 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5f915f72-36dd-40f3-a47c-7245505bf997-encryption-config\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.013052 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97cfb89f-0902-42fd-9e4d-a1459f2f2511-audit-dir\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.013069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.013089 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2393a548-e232-4d77-bbc2-f3daa72338c4-config-volume\") pod \"dns-default-cdlqr\" (UID: \"2393a548-e232-4d77-bbc2-f3daa72338c4\") " pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.013119 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.013214 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.013244 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdp4n\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-kube-api-access-xdp4n\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.017042 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84658e5e-5af1-49b0-a45a-f7aa7a852a98-service-ca-bundle\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.020489 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-audit-policies\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.021232 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvrdw\" (UniqueName: \"kubernetes.io/projected/87d9c733-9e98-470e-9c8d-cb8b1817784c-kube-api-access-pvrdw\") pod \"openshift-config-operator-7777fb866f-chklk\" (UID: \"87d9c733-9e98-470e-9c8d-cb8b1817784c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.021401 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-registration-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.021779 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-956mn\" (UID: \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\") " pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.023380 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.023738 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97cfb89f-0902-42fd-9e4d-a1459f2f2511-audit-dir\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.025385 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:17.525362135 +0000 UTC m=+137.667191607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.030732 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-console-config\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.031157 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf"] Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.031565 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-registry-tls\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.031869 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/71d18873-465e-4bc9-aca1-149975060eff-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.032449 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: 
\"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.033737 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.034237 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5f915f72-36dd-40f3-a47c-7245505bf997-audit-policies\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.044487 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-956mn\" (UID: \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\") " pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.046104 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.050323 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/84658e5e-5af1-49b0-a45a-f7aa7a852a98-default-certificate\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.052899 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.053480 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.054023 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84658e5e-5af1-49b0-a45a-f7aa7a852a98-metrics-certs\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.054376 
4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.054638 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f915f72-36dd-40f3-a47c-7245505bf997-serving-cert\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.054647 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5f915f72-36dd-40f3-a47c-7245505bf997-encryption-config\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.054826 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.054884 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-console-serving-cert\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.055192 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f915f72-36dd-40f3-a47c-7245505bf997-etcd-client\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.055907 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/87d9c733-9e98-470e-9c8d-cb8b1817784c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-chklk\" (UID: \"87d9c733-9e98-470e-9c8d-cb8b1817784c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.056059 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-service-ca\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.056401 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f915f72-36dd-40f3-a47c-7245505bf997-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.057115 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901a390d-2893-422d-ac24-660162a0cc6e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7fqbx\" (UID: \"901a390d-2893-422d-ac24-660162a0cc6e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.057895 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/71d18873-465e-4bc9-aca1-149975060eff-registry-certificates\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.057904 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f915f72-36dd-40f3-a47c-7245505bf997-audit-dir\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.058440 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-oauth-serving-cert\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.059197 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.059924 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71d18873-465e-4bc9-aca1-149975060eff-trusted-ca\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.060488 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-trusted-ca-bundle\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.060630 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/901a390d-2893-422d-ac24-660162a0cc6e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7fqbx\" (UID: \"901a390d-2893-422d-ac24-660162a0cc6e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.061471 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/5f915f72-36dd-40f3-a47c-7245505bf997-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.061536 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a683a3a-18b1-4377-9a7a-a00baeed9bc8-trusted-ca\") pod \"ingress-operator-5b745b69d9-nscpq\" (UID: \"0a683a3a-18b1-4377-9a7a-a00baeed9bc8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.063685 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.063830 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.063916 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9d15a09-7646-4c97-ba16-3a1fd2d212ab-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-22wtr\" (UID: \"d9d15a09-7646-4c97-ba16-3a1fd2d212ab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-22wtr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.063964 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwbt4\" (UniqueName: \"kubernetes.io/projected/97cfb89f-0902-42fd-9e4d-a1459f2f2511-kube-api-access-wwbt4\") pod \"oauth-openshift-558db77b4-4ds94\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.064205 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/84658e5e-5af1-49b0-a45a-f7aa7a852a98-stats-auth\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.070103 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87d9c733-9e98-470e-9c8d-cb8b1817784c-serving-cert\") pod \"openshift-config-operator-7777fb866f-chklk\" (UID: \"87d9c733-9e98-470e-9c8d-cb8b1817784c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.075768 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/71d18873-465e-4bc9-aca1-149975060eff-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.075950 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a683a3a-18b1-4377-9a7a-a00baeed9bc8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nscpq\" (UID: \"0a683a3a-18b1-4377-9a7a-a00baeed9bc8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.079405 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a683a3a-18b1-4377-9a7a-a00baeed9bc8-metrics-tls\") pod \"ingress-operator-5b745b69d9-nscpq\" (UID: \"0a683a3a-18b1-4377-9a7a-a00baeed9bc8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.083384 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-console-oauth-config\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.090553 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdp4n\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-kube-api-access-xdp4n\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: W1203 00:08:17.112512 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f4726ee_a716_44e3_a2d7_cfd634b1b476.slice/crio-055ecf87d114a22cd9b589b4bcb54b7ad1eacb9e2d0ae37dc1bbc06b7ef2d560 WatchSource:0}: Error finding container 055ecf87d114a22cd9b589b4bcb54b7ad1eacb9e2d0ae37dc1bbc06b7ef2d560: Status 404 returned error can't find the container with id 055ecf87d114a22cd9b589b4bcb54b7ad1eacb9e2d0ae37dc1bbc06b7ef2d560 Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.125252 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5llrs\" (UniqueName: \"kubernetes.io/projected/ac601043-ae94-4202-a446-ed478524075b-kube-api-access-5llrs\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.125377 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2393a548-e232-4d77-bbc2-f3daa72338c4-metrics-tls\") pod \"dns-default-cdlqr\" (UID: \"2393a548-e232-4d77-bbc2-f3daa72338c4\") " pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.125399 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf-node-bootstrap-token\") pod \"machine-config-server-8447f\" (UID: \"d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf\") " pod="openshift-machine-config-operator/machine-config-server-8447f" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.125447 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf-certs\") pod \"machine-config-server-8447f\" (UID: \"d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf\") " pod="openshift-machine-config-operator/machine-config-server-8447f" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.125475 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0-cert\") pod \"ingress-canary-ssdgl\" (UID: \"b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0\") " pod="openshift-ingress-canary/ingress-canary-ssdgl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.125491 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-socket-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.125532 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-mountpoint-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.125576 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.125597 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2393a548-e232-4d77-bbc2-f3daa72338c4-config-volume\") pod \"dns-default-cdlqr\" (UID: \"2393a548-e232-4d77-bbc2-f3daa72338c4\") " pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.125703 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-registration-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.125758 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-csi-data-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.125945 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-plugins-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.126005 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b4hm\" (UniqueName: 
\"kubernetes.io/projected/2393a548-e232-4d77-bbc2-f3daa72338c4-kube-api-access-8b4hm\") pod \"dns-default-cdlqr\" (UID: \"2393a548-e232-4d77-bbc2-f3daa72338c4\") " pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.126036 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqc5k\" (UniqueName: \"kubernetes.io/projected/d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf-kube-api-access-zqc5k\") pod \"machine-config-server-8447f\" (UID: \"d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf\") " pod="openshift-machine-config-operator/machine-config-server-8447f" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.126057 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2cww\" (UniqueName: \"kubernetes.io/projected/b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0-kube-api-access-n2cww\") pod \"ingress-canary-ssdgl\" (UID: \"b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0\") " pod="openshift-ingress-canary/ingress-canary-ssdgl" Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.129589 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:17.629426071 +0000 UTC m=+137.771255543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.130959 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2393a548-e232-4d77-bbc2-f3daa72338c4-config-volume\") pod \"dns-default-cdlqr\" (UID: \"2393a548-e232-4d77-bbc2-f3daa72338c4\") " pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.131780 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-socket-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.133206 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-registration-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.133375 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-mountpoint-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.133796 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8vbw\" (UniqueName: 
\"kubernetes.io/projected/0a683a3a-18b1-4377-9a7a-a00baeed9bc8-kube-api-access-p8vbw\") pod \"ingress-operator-5b745b69d9-nscpq\" (UID: \"0a683a3a-18b1-4377-9a7a-a00baeed9bc8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.134611 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvrdw\" (UniqueName: \"kubernetes.io/projected/87d9c733-9e98-470e-9c8d-cb8b1817784c-kube-api-access-pvrdw\") pod \"openshift-config-operator-7777fb866f-chklk\" (UID: \"87d9c733-9e98-470e-9c8d-cb8b1817784c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.134966 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-plugins-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.133820 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac601043-ae94-4202-a446-ed478524075b-csi-data-dir\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.138188 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0-cert\") pod \"ingress-canary-ssdgl\" (UID: \"b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0\") " pod="openshift-ingress-canary/ingress-canary-ssdgl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.148115 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf-certs\") pod \"machine-config-server-8447f\" (UID: \"d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf\") " pod="openshift-machine-config-operator/machine-config-server-8447f" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.149230 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2393a548-e232-4d77-bbc2-f3daa72338c4-metrics-tls\") pod \"dns-default-cdlqr\" (UID: \"2393a548-e232-4d77-bbc2-f3daa72338c4\") " pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.149298 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf-node-bootstrap-token\") pod \"machine-config-server-8447f\" (UID: \"d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf\") " pod="openshift-machine-config-operator/machine-config-server-8447f" Dec 03 00:08:17 crc kubenswrapper[4811]: W1203 00:08:17.158008 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod513c0923_f361_46af_8761_b4d809c1b287.slice/crio-aed00d6457238ade567f911a4f6c5d19967880c4babfbe5b46985f20bb5122e2 WatchSource:0}: Error finding container aed00d6457238ade567f911a4f6c5d19967880c4babfbe5b46985f20bb5122e2: Status 404 returned error can't find the container with id aed00d6457238ade567f911a4f6c5d19967880c4babfbe5b46985f20bb5122e2 Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.177024 
4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gvsv\" (UniqueName: \"kubernetes.io/projected/84658e5e-5af1-49b0-a45a-f7aa7a852a98-kube-api-access-7gvsv\") pod \"router-default-5444994796-hd7pw\" (UID: \"84658e5e-5af1-49b0-a45a-f7aa7a852a98\") " pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.192773 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wpzs6"] Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.205135 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.205715 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tswgx\" (UniqueName: \"kubernetes.io/projected/901a390d-2893-422d-ac24-660162a0cc6e-kube-api-access-tswgx\") pod \"openshift-controller-manager-operator-756b6f6bc6-7fqbx\" (UID: \"901a390d-2893-422d-ac24-660162a0cc6e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.221155 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xgmp\" (UniqueName: \"kubernetes.io/projected/cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d-kube-api-access-8xgmp\") pod \"console-f9d7485db-9jgcw\" (UID: \"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d\") " pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.224383 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2mbnl"] Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.227789 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.228303 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:17.728280751 +0000 UTC m=+137.870110223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.233582 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.235962 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.241795 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r99b\" (UniqueName: \"kubernetes.io/projected/d9d15a09-7646-4c97-ba16-3a1fd2d212ab-kube-api-access-6r99b\") pod \"multus-admission-controller-857f4d67dd-22wtr\" (UID: \"d9d15a09-7646-4c97-ba16-3a1fd2d212ab\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-22wtr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.251982 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-bound-sa-token\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.252582 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.294316 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx"] Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.322455 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9qzh\" (UniqueName: \"kubernetes.io/projected/5f915f72-36dd-40f3-a47c-7245505bf997-kube-api-access-m9qzh\") pod \"apiserver-7bbb656c7d-4fkh6\" (UID: \"5f915f72-36dd-40f3-a47c-7245505bf997\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.330283 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.330639 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:17.830625451 +0000 UTC m=+137.972454923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.335929 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b8mqn"] Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.339509 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmfbq\" (UniqueName: \"kubernetes.io/projected/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-kube-api-access-mmfbq\") pod \"marketplace-operator-79b997595-956mn\" (UID: \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\") " pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.351895 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5llrs\" (UniqueName: \"kubernetes.io/projected/ac601043-ae94-4202-a446-ed478524075b-kube-api-access-5llrs\") pod \"csi-hostpathplugin-zvgcl\" (UID: \"ac601043-ae94-4202-a446-ed478524075b\") " pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.354369 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b4hm\" (UniqueName: \"kubernetes.io/projected/2393a548-e232-4d77-bbc2-f3daa72338c4-kube-api-access-8b4hm\") pod \"dns-default-cdlqr\" (UID: \"2393a548-e232-4d77-bbc2-f3daa72338c4\") " pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.383473 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2cww\" (UniqueName: \"kubernetes.io/projected/b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0-kube-api-access-n2cww\") pod \"ingress-canary-ssdgl\" (UID: \"b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0\") " pod="openshift-ingress-canary/ingress-canary-ssdgl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.385369 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqc5k\" (UniqueName: \"kubernetes.io/projected/d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf-kube-api-access-zqc5k\") pod \"machine-config-server-8447f\" (UID: \"d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf\") " pod="openshift-machine-config-operator/machine-config-server-8447f" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.404999 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.423119 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-22wtr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.431942 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.433627 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:17.933580247 +0000 UTC m=+138.075409719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.433669 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.437065 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:17.937046816 +0000 UTC m=+138.078876288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.453982 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.488693 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:17 crc kubenswrapper[4811]: W1203 00:08:17.520453 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c1aa648_c5db_406f_9d3e_bc1ab95d29c4.slice/crio-9c3909e843a5beff12872ddf9bbbac286b42e6a050bc9689d17b7fb18396412b WatchSource:0}: Error finding container 9c3909e843a5beff12872ddf9bbbac286b42e6a050bc9689d17b7fb18396412b: Status 404 returned error can't find the container with id 9c3909e843a5beff12872ddf9bbbac286b42e6a050bc9689d17b7fb18396412b Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.539651 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.539955 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:18.039942501 +0000 UTC m=+138.181771973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.564520 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.581804 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.604127 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.615886 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8447f" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.616439 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ssdgl" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.647949 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.648449 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:18.148425648 +0000 UTC m=+138.290255120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.696012 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vscpq"] Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.697739 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5j74h"] Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.702015 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj"] Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.719081 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm"] Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.749532 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.749734 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:18.249706142 +0000 UTC m=+138.391535614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.750115 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.750503 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:18.250494181 +0000 UTC m=+138.392323653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.812391 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm"] Dec 03 00:08:17 crc kubenswrapper[4811]: W1203 00:08:17.836106 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod748ff627_bc26_49ba_a0bd_f970f18f216f.slice/crio-bca0c78825d697b42c2bf71c2ce073ffe83a58bfbfe1a17d89b0fb8d8cd9ea91 WatchSource:0}: Error finding container bca0c78825d697b42c2bf71c2ce073ffe83a58bfbfe1a17d89b0fb8d8cd9ea91: Status 404 returned error can't find the container with id bca0c78825d697b42c2bf71c2ce073ffe83a58bfbfe1a17d89b0fb8d8cd9ea91 Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.845395 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zc8fl"] Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.846652 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29412000-7zxph"] Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.851480 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.851699 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:18.351660421 +0000 UTC m=+138.493489893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.851862 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.852572 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:18.352543294 +0000 UTC m=+138.494372776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.858666 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sstkc" podStartSLOduration=119.85863184 podStartE2EDuration="1m59.85863184s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:17.85706537 +0000 UTC m=+137.998894842" watchObservedRunningTime="2025-12-03 00:08:17.85863184 +0000 UTC m=+138.000461312" Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.875834 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pstnw"] Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.954326 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:17 crc kubenswrapper[4811]: E1203 00:08:17.955523 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:18.45548291 +0000 UTC m=+138.597312382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:17 crc kubenswrapper[4811]: I1203 00:08:17.983700 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.013971 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" event={"ID":"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4","Type":"ContainerStarted","Data":"9c3909e843a5beff12872ddf9bbbac286b42e6a050bc9689d17b7fb18396412b"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.017004 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" event={"ID":"fc347b57-7a13-480c-b630-f2486ce233fc","Type":"ContainerStarted","Data":"90d1a096b7f6e9e72106f5cdf7372be9b87e5ac3d7766f9fbf886b70bd74bf72"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.017121 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" event={"ID":"fc347b57-7a13-480c-b630-f2486ce233fc","Type":"ContainerStarted","Data":"15ea85ad76ec5b37e8e1d449fe84c0e22d1963d63f56f48ace7b1e56888b563e"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.019605 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" event={"ID":"c85f3795-cbc4-46fe-ba79-b68904df2de3","Type":"ContainerStarted","Data":"1cf00aa8bcf954016680c16aeafe5837a1ddc6e823d7ba8b1aa8b7c8d9a6e3ba"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.021582 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" event={"ID":"513c0923-f361-46af-8761-b4d809c1b287","Type":"ContainerStarted","Data":"aed00d6457238ade567f911a4f6c5d19967880c4babfbe5b46985f20bb5122e2"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.024103 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" event={"ID":"6ee2362a-613c-4927-9525-3d7f87167ab7","Type":"ContainerStarted","Data":"77d35848624013f3b21d949600c9977d496ae2bb61ef08176086b0cfd5f71c33"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.029791 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" event={"ID":"cb5b4076-512d-4f19-a773-36cd7a54a8a4","Type":"ContainerStarted","Data":"6f953f2a4c5980ebfd8ee99382b71c8ed743830b2a2adf5552fa34962abae26c"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.031094 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5j74h" event={"ID":"6531f918-708c-4bb8-a418-d09dfb7a8b3a","Type":"ContainerStarted","Data":"3a7105decb52160c81afc22b23e941a80afa96dc1f961c65e00eef0505e68427"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.033171 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-hd7pw" event={"ID":"84658e5e-5af1-49b0-a45a-f7aa7a852a98","Type":"ContainerStarted","Data":"31ea19d8409aafabf7000bb51cbfd7dff6ff86244b305297357553c1892f7b95"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.034905 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" event={"ID":"4a505c1c-0ab7-4920-b43e-6475fae9b32b","Type":"ContainerStarted","Data":"fab469e3b204e55ad8cad9b47dd052b7857a9deb1d2ba1bd38a85364d3e417f5"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.038719 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" event={"ID":"abf21905-6e83-49e0-b238-dae340d0bcca","Type":"ContainerStarted","Data":"492f64cac5b993e336f86ff0b553c49d259ddadd77247b54c57cc4c29f3e7ed6"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.056443 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:18 crc kubenswrapper[4811]: E1203 00:08:18.056899 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:18.556880676 +0000 UTC m=+138.698710148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.060577 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wpzs6" event={"ID":"4a68baef-4258-4aea-b775-172682cbf844","Type":"ContainerStarted","Data":"42089d8367a51c3dda3b0d652aa30755d5430751a98109d018446d29fcdbf96b"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.069462 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" event={"ID":"6f4726ee-a716-44e3-a2d7-cfd634b1b476","Type":"ContainerStarted","Data":"055ecf87d114a22cd9b589b4bcb54b7ad1eacb9e2d0ae37dc1bbc06b7ef2d560"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.071788 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b8mqn" event={"ID":"b917a512-4630-408e-9d9c-0ca6808d4a5b","Type":"ContainerStarted","Data":"51439774ae3f5baf1ab75826185f6b1773595bdc93d2dee6ab5caf8771a59978"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.073389 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" 
event={"ID":"748ff627-bc26-49ba-a0bd-f970f18f216f","Type":"ContainerStarted","Data":"bca0c78825d697b42c2bf71c2ce073ffe83a58bfbfe1a17d89b0fb8d8cd9ea91"} Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.098099 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r897j" podStartSLOduration=120.09806492 podStartE2EDuration="2m0.09806492s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:18.08830127 +0000 UTC m=+138.230130752" watchObservedRunningTime="2025-12-03 00:08:18.09806492 +0000 UTC m=+138.239894392" Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.161167 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:18 crc kubenswrapper[4811]: E1203 00:08:18.163302 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:18.66327326 +0000 UTC m=+138.805102732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.174243 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-vjx2n" podStartSLOduration=120.174216821 podStartE2EDuration="2m0.174216821s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:18.17344732 +0000 UTC m=+138.315276792" watchObservedRunningTime="2025-12-03 00:08:18.174216821 +0000 UTC m=+138.316046293" Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.260949 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.264925 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:18 crc kubenswrapper[4811]: E1203 00:08:18.266791 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:18.76675413 +0000 UTC m=+138.908583602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:18 crc kubenswrapper[4811]: W1203 00:08:18.281929 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5716cc3_cd8a_40c5_82b2_7c1389e8eeaf.slice/crio-a1ae801755bbda80f1574b743cae7d5a959b1cb817b81f957d7961ef89c47964 WatchSource:0}: Error finding container a1ae801755bbda80f1574b743cae7d5a959b1cb817b81f957d7961ef89c47964: Status 404 returned error can't find the container with id a1ae801755bbda80f1574b743cae7d5a959b1cb817b81f957d7961ef89c47964 Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.299987 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.320200 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9jgcw"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.325083 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.327654 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wbv94"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.367532 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:18 crc kubenswrapper[4811]: E1203 00:08:18.369749 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:18.869723936 +0000 UTC m=+139.011553408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.374695 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" podStartSLOduration=120.374677503 podStartE2EDuration="2m0.374677503s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:18.370098466 +0000 UTC m=+138.511927938" watchObservedRunningTime="2025-12-03 00:08:18.374677503 +0000 UTC m=+138.516506995" Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.458240 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.478130 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:18 crc kubenswrapper[4811]: E1203 00:08:18.478609 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:18.978564203 +0000 UTC m=+139.120393675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.517675 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-chklk"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.534496 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4ds94"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.566454 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.568779 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-22wtr"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.569488 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.580640 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:18 crc kubenswrapper[4811]: E1203 00:08:18.581038 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:19.081022476 +0000 UTC m=+139.222851948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.660210 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ssdgl"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.680749 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-956mn"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.682340 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:18 crc kubenswrapper[4811]: E1203 00:08:18.682831 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:19.182817243 +0000 UTC m=+139.324646715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.687244 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cdlqr"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.725877 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq"] Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.742013 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zvgcl"] Dec 03 00:08:18 crc kubenswrapper[4811]: W1203 00:08:18.757046 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87d9c733_9e98_470e_9c8d_cb8b1817784c.slice/crio-4f724d32a0b6c0ee8ff6baecef503300591162aefefdf64d025f5a0879a599d6 WatchSource:0}: Error finding container 4f724d32a0b6c0ee8ff6baecef503300591162aefefdf64d025f5a0879a599d6: Status 404 returned error can't find the container with id 4f724d32a0b6c0ee8ff6baecef503300591162aefefdf64d025f5a0879a599d6 Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.773905 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-78j8h" podStartSLOduration=120.773878474 podStartE2EDuration="2m0.773878474s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:18.771944215 +0000 UTC m=+138.913773687" watchObservedRunningTime="2025-12-03 00:08:18.773878474 +0000 UTC m=+138.915707956" Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.783783 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:18 crc kubenswrapper[4811]: E1203 00:08:18.784140 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:19.284124966 +0000 UTC m=+139.425954438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:18 crc kubenswrapper[4811]: W1203 00:08:18.839961 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod901a390d_2893_422d_ac24_660162a0cc6e.slice/crio-4661183ee2d107b85f134cfa07a91442c3a6be92e80a25d903ae8ecca7f9d8c5 WatchSource:0}: Error finding container 4661183ee2d107b85f134cfa07a91442c3a6be92e80a25d903ae8ecca7f9d8c5: Status 404 returned error can't find the container with id 4661183ee2d107b85f134cfa07a91442c3a6be92e80a25d903ae8ecca7f9d8c5 Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.885130 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:18 crc kubenswrapper[4811]: E1203 00:08:18.885599 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:19.385576524 +0000 UTC m=+139.527405996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:18 crc kubenswrapper[4811]: I1203 00:08:18.985604 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:18 crc kubenswrapper[4811]: E1203 00:08:18.985979 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:19.485965305 +0000 UTC m=+139.627794777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.011550 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w974d" podStartSLOduration=122.011532349 podStartE2EDuration="2m2.011532349s" podCreationTimestamp="2025-12-03 00:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:19.010956464 +0000 UTC m=+139.152785926" watchObservedRunningTime="2025-12-03 00:08:19.011532349 +0000 UTC m=+139.153361811" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.038245 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rhl4d" podStartSLOduration=122.038226283 podStartE2EDuration="2m2.038226283s" podCreationTimestamp="2025-12-03 00:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:19.037067413 +0000 UTC m=+139.178896885" watchObservedRunningTime="2025-12-03 00:08:19.038226283 +0000 UTC m=+139.180055755" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.086728 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:19 crc kubenswrapper[4811]: E1203 00:08:19.087540 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:19.587528015 +0000 UTC m=+139.729357477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.181900 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" event={"ID":"5f915f72-36dd-40f3-a47c-7245505bf997","Type":"ContainerStarted","Data":"7fcff1fdd14b149af6bb667ebb51f3d278f1691ba285dd09fca424814aa6eb77"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.188028 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:19 crc kubenswrapper[4811]: E1203 00:08:19.189155 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:19.689118686 +0000 UTC m=+139.830948158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.192192 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5j74h" event={"ID":"6531f918-708c-4bb8-a418-d09dfb7a8b3a","Type":"ContainerStarted","Data":"94c480c3f15ee7a3063c9b6cf9d96a18239f280432de73a0fb68f47b71559f09"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.192483 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5j74h" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.192550 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:19 crc kubenswrapper[4811]: E1203 00:08:19.192868 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:19.692853681 +0000 UTC m=+139.834683153 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.200459 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-5j74h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.200545 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5j74h" podUID="6531f918-708c-4bb8-a418-d09dfb7a8b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.212846 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29412000-7zxph" event={"ID":"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827","Type":"ContainerStarted","Data":"9f12ddca80b89cadb1233f88420b3ed6cd3218c4a324a1b928678736645915aa"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.212906 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29412000-7zxph" event={"ID":"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827","Type":"ContainerStarted","Data":"8bc65c26eff30c3a4bbdbd5a425f79722814eb2fdd4258d55e7abb1d20712365"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.248963 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" podStartSLOduration=121.248944417 podStartE2EDuration="2m1.248944417s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:19.058856691 +0000 UTC m=+139.200686163" watchObservedRunningTime="2025-12-03 00:08:19.248944417 +0000 UTC m=+139.390773889" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.252138 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" event={"ID":"6f4726ee-a716-44e3-a2d7-cfd634b1b476","Type":"ContainerStarted","Data":"9ab81f0d1cd016abf54df7479a66485406e1812bf3195c267358fe4c8fbd72d5"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.255653 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" event={"ID":"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4","Type":"ContainerStarted","Data":"4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.256188 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.281080 4811 patch_prober.go:28] interesting 
pod/controller-manager-879f6c89f-2mbnl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.281149 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" podUID="4c1aa648-c5db-406f-9d3e-bc1ab95d29c4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.295054 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.296420 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" event={"ID":"513c0923-f361-46af-8761-b4d809c1b287","Type":"ContainerStarted","Data":"75bde0e9ce85554a7410822f6cf0a2dcd3f729fc9479b86eab4715524716c2db"} Dec 03 00:08:19 crc kubenswrapper[4811]: E1203 00:08:19.298025 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:19.797992493 +0000 UTC m=+139.939821955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.315779 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5j74h" podStartSLOduration=121.315751638 podStartE2EDuration="2m1.315751638s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:19.258458561 +0000 UTC m=+139.400288033" watchObservedRunningTime="2025-12-03 00:08:19.315751638 +0000 UTC m=+139.457581110" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.345342 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" event={"ID":"901a390d-2893-422d-ac24-660162a0cc6e","Type":"ContainerStarted","Data":"4661183ee2d107b85f134cfa07a91442c3a6be92e80a25d903ae8ecca7f9d8c5"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.352241 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" podStartSLOduration=121.352226282 podStartE2EDuration="2m1.352226282s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:19.317280477 +0000 UTC m=+139.459109959" watchObservedRunningTime="2025-12-03 00:08:19.352226282 +0000 UTC m=+139.494055754" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.352696 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29412000-7zxph" podStartSLOduration=121.352692024 podStartE2EDuration="2m1.352692024s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:19.352027647 +0000 UTC m=+139.493857119" watchObservedRunningTime="2025-12-03 00:08:19.352692024 +0000 UTC m=+139.494521496" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.367572 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt" event={"ID":"72e13ac0-ed91-41a8-8df4-ca88a2838fd3","Type":"ContainerStarted","Data":"6a842e8617682db23b5068c55fdecb7773cb6dbb0540eaf0815ae2b08fe7665c"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.403659 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vzhhf" podStartSLOduration=121.403623868 podStartE2EDuration="2m1.403623868s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:19.379030048 +0000 UTC m=+139.520859520" watchObservedRunningTime="2025-12-03 00:08:19.403623868 +0000 UTC m=+139.545453340" 
Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.430740 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:19 crc kubenswrapper[4811]: E1203 00:08:19.431348 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:19.931333358 +0000 UTC m=+140.073162830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.467277 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ssdgl" event={"ID":"b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0","Type":"ContainerStarted","Data":"c0fc1c3395956abdfbb125c5bb4e47ea4fb3a27350d9dc38949949deceb21963"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.474545 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" event={"ID":"c8dc919c-6856-4c26-a76d-3ba3212fe7c3","Type":"ContainerStarted","Data":"651736d01a0d7ffe72aa125fd21bdd5f53a89ae7dea30b105be2ac3b799ee4bb"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.512625 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" event={"ID":"e538a4e7-dd56-48db-828e-49af39ac5def","Type":"ContainerStarted","Data":"8bab72d4c32b5f24be31f8e5310aa0026a99c4fa1fa1a13fb6e82c38f8971ad4"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.524818 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" event={"ID":"97cfb89f-0902-42fd-9e4d-a1459f2f2511","Type":"ContainerStarted","Data":"cb3efb3ae9c957845b2b027219738842990d9d77d793a223df0aff8f2c81985a"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.533989 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:19 crc kubenswrapper[4811]: E1203 00:08:19.534545 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:20.034516089 +0000 UTC m=+140.176345561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.555921 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" event={"ID":"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401","Type":"ContainerStarted","Data":"557d778a468e8212c985e7ef004ae55aded7f763531e6d7a01b45a0e8d6c8065"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.601472 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" event={"ID":"abf21905-6e83-49e0-b238-dae340d0bcca","Type":"ContainerStarted","Data":"2720dc98d2e78e1cdd085d064232c8bdc305584351657316ba3dc55edf30d289"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.602934 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.611193 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" event={"ID":"87d9c733-9e98-470e-9c8d-cb8b1817784c","Type":"ContainerStarted","Data":"4f724d32a0b6c0ee8ff6baecef503300591162aefefdf64d025f5a0879a599d6"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.622540 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" event={"ID":"ac601043-ae94-4202-a446-ed478524075b","Type":"ContainerStarted","Data":"a531dbacd2c087be623ac471e20bd76ef1ac45b1b6ab098e82e8ea0165ab0992"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.647745 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wbv94" event={"ID":"d5580144-bfbe-490a-a448-2c225be80621","Type":"ContainerStarted","Data":"7901fe7ddeb2d15d7a6acbc7a2ff615e155e5aaf8902a41d4a0152c5c87eb1bd"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.648115 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:19 crc kubenswrapper[4811]: E1203 00:08:19.649308 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:20.149294558 +0000 UTC m=+140.291124030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.654196 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.656286 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" event={"ID":"0a683a3a-18b1-4377-9a7a-a00baeed9bc8","Type":"ContainerStarted","Data":"fae8b5294fcacec98f920f203da7176299439ba030bc72c9798c0474a6262a04"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.659402 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-22wtr" event={"ID":"d9d15a09-7646-4c97-ba16-3a1fd2d212ab","Type":"ContainerStarted","Data":"1cf46a8111b4b293e4ca35b0ac5632ba8a3d80f7f746861cc94a8695c7d45832"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.666805 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9jgcw" event={"ID":"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d","Type":"ContainerStarted","Data":"8e5f464a7e791b12ca8abbf627882c08d28392ae2ac54feff00103913c1e32ee"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.672374 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pstnw" event={"ID":"9746028f-d836-467c-91d8-4530f09ac665","Type":"ContainerStarted","Data":"68a8a209611bdcc39d30d11611bc35ef3296435f59bbc94fd5d48dcc3f339c9e"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.694701 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ldphj" podStartSLOduration=121.694682081 podStartE2EDuration="2m1.694682081s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:19.647508513 +0000 UTC m=+139.789337985" watchObservedRunningTime="2025-12-03 00:08:19.694682081 +0000 UTC m=+139.836511553" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.724902 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" event={"ID":"7f7cb9ef-f206-4b06-918f-1ac96967e618","Type":"ContainerStarted","Data":"6050e4569a9894263417e20d70aee332f894c3dfe6c45d1c9ad32946acdb6085"} Dec 03 00:08:19 crc kubenswrapper[4811]: E1203 00:08:19.756534 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:20.256503033 +0000 UTC m=+140.398332505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.756391 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.757057 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:19 crc kubenswrapper[4811]: E1203 00:08:19.757551 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:20.25754068 +0000 UTC m=+140.399370152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.769400 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" event={"ID":"4bdd961a-4364-4a42-b398-17570f149c42","Type":"ContainerStarted","Data":"1be66278e88649b10bd9a2ce414f3e6144f2d4913c04d60f8ce06a5286b0af81"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.769485 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" event={"ID":"4bdd961a-4364-4a42-b398-17570f149c42","Type":"ContainerStarted","Data":"00f180520454dd45feaf356cfba3349258e1d87eb5c89785b0eebc43cc939839"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.777767 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" event={"ID":"cb5b4076-512d-4f19-a773-36cd7a54a8a4","Type":"ContainerStarted","Data":"4f6c2dda6bf3211730a99d8650227106a3be1e9a95d81d3da71eba12448fc6f8"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.794192 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-zc8fl" podStartSLOduration=121.794150587 podStartE2EDuration="2m1.794150587s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-03 00:08:19.789996661 +0000 UTC m=+139.931826133" watchObservedRunningTime="2025-12-03 00:08:19.794150587 +0000 UTC m=+139.935980079" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.800208 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wpzs6" event={"ID":"4a68baef-4258-4aea-b775-172682cbf844","Type":"ContainerStarted","Data":"2d81dc64e39467b1fbc6aa534828a0f5afde73c56c52f36942e035fa4a76b675"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.815633 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" event={"ID":"748ff627-bc26-49ba-a0bd-f970f18f216f","Type":"ContainerStarted","Data":"1d07786c613308a4744f18e2e837400b5aea2065e444a0b90e875f8d51002517"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.820463 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gwmbx" podStartSLOduration=121.820452261 podStartE2EDuration="2m1.820452261s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:19.820017069 +0000 UTC m=+139.961846531" watchObservedRunningTime="2025-12-03 00:08:19.820452261 +0000 UTC m=+139.962281733" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.842197 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" event={"ID":"2735423a-1c0a-489c-ada2-f5ba5aa58397","Type":"ContainerStarted","Data":"5bffe0b85613de29bac5f9d4a785015946d0b8b5b2e3a64542a152b07beaeac8"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.842282 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" event={"ID":"2735423a-1c0a-489c-ada2-f5ba5aa58397","Type":"ContainerStarted","Data":"c8bc9f6e68e6102d1f9cdf0cc9cb8f1932839b1c3de677d5df2c230d3a8c454e"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.864441 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.864671 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" event={"ID":"4a505c1c-0ab7-4920-b43e-6475fae9b32b","Type":"ContainerStarted","Data":"4390c7e4fbf523691bcc1133f76990c9d73ccd8da339a07a571dabe8e2dc8c7e"} Dec 03 00:08:19 crc kubenswrapper[4811]: E1203 00:08:19.865980 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:20.365952716 +0000 UTC m=+140.507782188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.891594 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b8mqn" event={"ID":"b917a512-4630-408e-9d9c-0ca6808d4a5b","Type":"ContainerStarted","Data":"e2ad43c9045af6484fbec09c80b5bba32dc4635fd8c6724c3513f8289d1c7e89"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.892995 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.898206 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wpzs6" podStartSLOduration=121.898189211 podStartE2EDuration="2m1.898189211s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:19.854828481 +0000 UTC m=+139.996657953" watchObservedRunningTime="2025-12-03 00:08:19.898189211 +0000 UTC m=+140.040018683" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.911883 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vscpq" podStartSLOduration=121.911858501 podStartE2EDuration="2m1.911858501s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:19.897418431 +0000 UTC m=+140.039247903" watchObservedRunningTime="2025-12-03 00:08:19.911858501 +0000 UTC m=+140.053687973" Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.932285 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cdlqr" event={"ID":"2393a548-e232-4d77-bbc2-f3daa72338c4","Type":"ContainerStarted","Data":"c7c718d05217bbba46fd093582f14f2d3f4d5a48b48cc43ff1860587b69ad347"} Dec 03 00:08:19 crc kubenswrapper[4811]: I1203 00:08:19.982989 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:19 crc kubenswrapper[4811]: E1203 00:08:19.983719 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:20.48370344 +0000 UTC m=+140.625532912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.022018 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" event={"ID":"6ee2362a-613c-4927-9525-3d7f87167ab7","Type":"ContainerStarted","Data":"111be1513e4adbd2a2320b84e8e07a180cb4e62c8a7b8b464a8fa91eacaadea4"} Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.056277 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-b8mqn" podStartSLOduration=122.056246808 podStartE2EDuration="2m2.056246808s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:19.942138816 +0000 UTC m=+140.083968288" watchObservedRunningTime="2025-12-03 00:08:20.056246808 +0000 UTC m=+140.198076280" Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.057523 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7bqms" podStartSLOduration=122.057515491 podStartE2EDuration="2m2.057515491s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:20.054049291 +0000 UTC m=+140.195878763" watchObservedRunningTime="2025-12-03 00:08:20.057515491 +0000 UTC m=+140.199344963" Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.099507 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:20 crc kubenswrapper[4811]: E1203 00:08:20.100681 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:20.600643495 +0000 UTC m=+140.742473097 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.106070 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2" event={"ID":"bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8","Type":"ContainerStarted","Data":"9b8238d864528a6c18cacf4e9e525c8c106161edd66328bfd2664beec53d907c"} Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.180582 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hd7pw" event={"ID":"84658e5e-5af1-49b0-a45a-f7aa7a852a98","Type":"ContainerStarted","Data":"75387bef77ced6246157ff10243cb3d4f14aa8f4de6b51ccd54f76e9db3d44ac"} Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.201798 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:20 crc kubenswrapper[4811]: E1203 00:08:20.202177 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:20.702163784 +0000 UTC m=+140.843993266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.251493 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8447f" event={"ID":"d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf","Type":"ContainerStarted","Data":"a1ae801755bbda80f1574b743cae7d5a959b1cb817b81f957d7961ef89c47964"} Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.256651 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.282818 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:20 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:20 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:20 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.282920 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.308693 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:20 crc kubenswrapper[4811]: E1203 00:08:20.310706 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:20.810680243 +0000 UTC m=+140.952509715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.438133 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:20 crc kubenswrapper[4811]: E1203 00:08:20.438850 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:20.938837774 +0000 UTC m=+141.080667246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.552430 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:20 crc kubenswrapper[4811]: E1203 00:08:20.552915 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:21.052896274 +0000 UTC m=+141.194725746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.656466 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:20 crc kubenswrapper[4811]: E1203 00:08:20.657182 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:21.157167574 +0000 UTC m=+141.298997046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.767892 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:20 crc kubenswrapper[4811]: E1203 00:08:20.768306 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:21.268289558 +0000 UTC m=+141.410119040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.826027 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hd7pw" podStartSLOduration=122.826006697 podStartE2EDuration="2m2.826006697s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:20.824381485 +0000 UTC m=+140.966210977" watchObservedRunningTime="2025-12-03 00:08:20.826006697 +0000 UTC m=+140.967836169" Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.827030 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8447f" podStartSLOduration=6.827023322 podStartE2EDuration="6.827023322s" podCreationTimestamp="2025-12-03 00:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:20.765157388 +0000 UTC m=+140.906986860" watchObservedRunningTime="2025-12-03 00:08:20.827023322 +0000 UTC m=+140.968852794" Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.877471 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:20 crc kubenswrapper[4811]: E1203 00:08:20.878243 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:21.378228794 +0000 UTC m=+141.520058266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.893371 4811 patch_prober.go:28] interesting pod/console-operator-58897d9998-b8mqn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.893451 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-b8mqn" podUID="b917a512-4630-408e-9d9c-0ca6808d4a5b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 00:08:20 crc kubenswrapper[4811]: I1203 00:08:20.980182 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:20 crc kubenswrapper[4811]: E1203 00:08:20.980523 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:21.480510012 +0000 UTC m=+141.622339474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.081621 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:21 crc kubenswrapper[4811]: E1203 00:08:21.082051 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:21.582038123 +0000 UTC m=+141.723867595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.102992 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ms8z7"] Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.124984 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.129706 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.138574 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ms8z7"] Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.184135 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:21 crc kubenswrapper[4811]: E1203 00:08:21.184520 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:21.684505766 +0000 UTC m=+141.826335238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:21 crc kubenswrapper[4811]: E1203 00:08:21.227321 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87d9c733_9e98_470e_9c8d_cb8b1817784c.slice/crio-755dce1b52a476c8afe68b3834e294c868e578cff1ae69bb7f703bf2c20ddf5a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5580144_bfbe_490a_a448_2c225be80621.slice/crio-0531812d5e1f3af095cd77ca9e017ebf381fd10702539348afec135a6acbc825.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87d9c733_9e98_470e_9c8d_cb8b1817784c.slice/crio-conmon-755dce1b52a476c8afe68b3834e294c868e578cff1ae69bb7f703bf2c20ddf5a.scope\": RecentStats: unable to find data in memory cache]" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.271052 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:21 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:21 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:21 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.271124 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.258248 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sfsfv"] Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.272737 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.284687 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.285471 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43747cdd-50ef-43df-b98d-a4d855984bb3-catalog-content\") pod \"community-operators-ms8z7\" (UID: \"43747cdd-50ef-43df-b98d-a4d855984bb3\") " pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.285551 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v24ff\" (UniqueName: \"kubernetes.io/projected/43747cdd-50ef-43df-b98d-a4d855984bb3-kube-api-access-v24ff\") pod \"community-operators-ms8z7\" (UID: \"43747cdd-50ef-43df-b98d-a4d855984bb3\") " pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.285576 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43747cdd-50ef-43df-b98d-a4d855984bb3-utilities\") pod \"community-operators-ms8z7\" (UID: \"43747cdd-50ef-43df-b98d-a4d855984bb3\") " pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.285630 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:21 crc kubenswrapper[4811]: E1203 00:08:21.285916 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:21.785905412 +0000 UTC m=+141.927734884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.336636 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfsfv"] Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.337697 4811 generic.go:334] "Generic (PLEG): container finished" podID="d5580144-bfbe-490a-a448-2c225be80621" containerID="0531812d5e1f3af095cd77ca9e017ebf381fd10702539348afec135a6acbc825" exitCode=0 Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.337785 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wbv94" event={"ID":"d5580144-bfbe-490a-a448-2c225be80621","Type":"ContainerDied","Data":"0531812d5e1f3af095cd77ca9e017ebf381fd10702539348afec135a6acbc825"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.389496 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:21 crc kubenswrapper[4811]: E1203 00:08:21.391876 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:21.891846724 +0000 UTC m=+142.033676196 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.424027 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" event={"ID":"0a683a3a-18b1-4377-9a7a-a00baeed9bc8","Type":"ContainerStarted","Data":"2148cc64d8565e5fb47b04ac4f3a60e7354a615c9e91de111e0ca4187e3220b5"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.424079 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" event={"ID":"0a683a3a-18b1-4377-9a7a-a00baeed9bc8","Type":"ContainerStarted","Data":"4364f946463307cd8e109f955dc668e2ae3cedb66f6710ac60e7524a8555574e"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.445356 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02c207e-d2f6-4c42-8e80-8967413395c0-utilities\") pod \"certified-operators-sfsfv\" (UID: \"e02c207e-d2f6-4c42-8e80-8967413395c0\") " pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.445554 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02c207e-d2f6-4c42-8e80-8967413395c0-catalog-content\") pod \"certified-operators-sfsfv\" (UID: \"e02c207e-d2f6-4c42-8e80-8967413395c0\") " pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.445787 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v24ff\" (UniqueName: \"kubernetes.io/projected/43747cdd-50ef-43df-b98d-a4d855984bb3-kube-api-access-v24ff\") pod \"community-operators-ms8z7\" (UID: \"43747cdd-50ef-43df-b98d-a4d855984bb3\") " pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.445925 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43747cdd-50ef-43df-b98d-a4d855984bb3-utilities\") pod \"community-operators-ms8z7\" (UID: \"43747cdd-50ef-43df-b98d-a4d855984bb3\") " pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.446162 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.446289 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43747cdd-50ef-43df-b98d-a4d855984bb3-catalog-content\") pod \"community-operators-ms8z7\" (UID: \"43747cdd-50ef-43df-b98d-a4d855984bb3\") " 
pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.446333 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqbvj\" (UniqueName: \"kubernetes.io/projected/e02c207e-d2f6-4c42-8e80-8967413395c0-kube-api-access-wqbvj\") pod \"certified-operators-sfsfv\" (UID: \"e02c207e-d2f6-4c42-8e80-8967413395c0\") " pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.451293 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43747cdd-50ef-43df-b98d-a4d855984bb3-utilities\") pod \"community-operators-ms8z7\" (UID: \"43747cdd-50ef-43df-b98d-a4d855984bb3\") " pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.452194 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43747cdd-50ef-43df-b98d-a4d855984bb3-catalog-content\") pod \"community-operators-ms8z7\" (UID: \"43747cdd-50ef-43df-b98d-a4d855984bb3\") " pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:08:21 crc kubenswrapper[4811]: E1203 00:08:21.452905 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:21.952882317 +0000 UTC m=+142.094711789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.488601 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pstnw" event={"ID":"9746028f-d836-467c-91d8-4530f09ac665","Type":"ContainerStarted","Data":"c6ae431875f7c58fccf4cb2c3656d003bc9faeed35eb85b5eb935519873b267c"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.512347 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6dpqh"] Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.512424 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nscpq" podStartSLOduration=123.512391341 podStartE2EDuration="2m3.512391341s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:21.489140325 +0000 UTC m=+141.630969797" watchObservedRunningTime="2025-12-03 00:08:21.512391341 +0000 UTC m=+141.654220813" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.526650 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.536782 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2" event={"ID":"bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8","Type":"ContainerStarted","Data":"754add75025033b9ed767426709bd32791d7f63a027813a807cc5a7d26483c23"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.536823 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2" event={"ID":"bbf9b6c9-1d96-4b24-b1d5-a3e7034af2c8","Type":"ContainerStarted","Data":"c6e505c05cd5ab0b871ae36d1205b65003893fe7e6b7cb1bd63b7f2e531bc002"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.540361 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dpqh"] Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.544614 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v24ff\" (UniqueName: \"kubernetes.io/projected/43747cdd-50ef-43df-b98d-a4d855984bb3-kube-api-access-v24ff\") pod \"community-operators-ms8z7\" (UID: \"43747cdd-50ef-43df-b98d-a4d855984bb3\") " pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.551454 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" event={"ID":"4a505c1c-0ab7-4920-b43e-6475fae9b32b","Type":"ContainerStarted","Data":"1909b7b039f23bd175cbdf8f37336593038e08447bd599ca7861740028e8c220"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.552198 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.554585 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:21 crc kubenswrapper[4811]: E1203 00:08:21.554974 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:22.05493966 +0000 UTC m=+142.196769132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.555081 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.555163 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqbvj\" (UniqueName: \"kubernetes.io/projected/e02c207e-d2f6-4c42-8e80-8967413395c0-kube-api-access-wqbvj\") pod \"certified-operators-sfsfv\" (UID: \"e02c207e-d2f6-4c42-8e80-8967413395c0\") " pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.555229 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02c207e-d2f6-4c42-8e80-8967413395c0-utilities\") pod \"certified-operators-sfsfv\" (UID: \"e02c207e-d2f6-4c42-8e80-8967413395c0\") " pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.555357 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02c207e-d2f6-4c42-8e80-8967413395c0-catalog-content\") pod \"certified-operators-sfsfv\" (UID: \"e02c207e-d2f6-4c42-8e80-8967413395c0\") " pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:08:21 crc kubenswrapper[4811]: E1203 00:08:21.555685 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:22.055667189 +0000 UTC m=+142.197496661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.556225 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02c207e-d2f6-4c42-8e80-8967413395c0-catalog-content\") pod \"certified-operators-sfsfv\" (UID: \"e02c207e-d2f6-4c42-8e80-8967413395c0\") " pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.556696 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02c207e-d2f6-4c42-8e80-8967413395c0-utilities\") pod \"certified-operators-sfsfv\" (UID: \"e02c207e-d2f6-4c42-8e80-8967413395c0\") " pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.573188 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.584330 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ssdgl" event={"ID":"b80158bd-99fb-4f61-a9bc-b3f16b0ec9a0","Type":"ContainerStarted","Data":"b57fd219f9910572cb7786827ca3505eb48be498d54455b937004febf40342af"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.604750 4811 generic.go:334] "Generic (PLEG): container finished" podID="87d9c733-9e98-470e-9c8d-cb8b1817784c" containerID="755dce1b52a476c8afe68b3834e294c868e578cff1ae69bb7f703bf2c20ddf5a" exitCode=0 Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.604841 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" event={"ID":"87d9c733-9e98-470e-9c8d-cb8b1817784c","Type":"ContainerDied","Data":"755dce1b52a476c8afe68b3834e294c868e578cff1ae69bb7f703bf2c20ddf5a"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.635677 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqbvj\" (UniqueName: \"kubernetes.io/projected/e02c207e-d2f6-4c42-8e80-8967413395c0-kube-api-access-wqbvj\") pod \"certified-operators-sfsfv\" (UID: \"e02c207e-d2f6-4c42-8e80-8967413395c0\") " pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.635766 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" event={"ID":"c8dc919c-6856-4c26-a76d-3ba3212fe7c3","Type":"ContainerStarted","Data":"e81d1ea45e566241708cc21f4bfff647df2fec92bef715774c33191108a88c76"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.658916 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 
00:08:21.660028 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7kpj\" (UniqueName: \"kubernetes.io/projected/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-kube-api-access-s7kpj\") pod \"community-operators-6dpqh\" (UID: \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\") " pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.660106 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-catalog-content\") pod \"community-operators-6dpqh\" (UID: \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\") " pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.660215 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-utilities\") pod \"community-operators-6dpqh\" (UID: \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\") " pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:08:21 crc kubenswrapper[4811]: E1203 00:08:21.660403 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:22.16037688 +0000 UTC m=+142.302206352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.662198 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" event={"ID":"901a390d-2893-422d-ac24-660162a0cc6e","Type":"ContainerStarted","Data":"e642042f0f851091ab462d25653a89bc2335c38514aff5984f6b4fdd633d3b90"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.672695 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" event={"ID":"7f7cb9ef-f206-4b06-918f-1ac96967e618","Type":"ContainerStarted","Data":"305a956608808a0afe924d986d0e8e7bbda60e4bdd3e0b47bb978e77b394cf92"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.673765 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.676112 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-22wtr" event={"ID":"d9d15a09-7646-4c97-ba16-3a1fd2d212ab","Type":"ContainerStarted","Data":"3cf9d593f8e1d2ffe8b00e135f54d90a006541b322dd582b1d06c2a3e00546a9"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.695044 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" Dec 03 00:08:21 crc kubenswrapper[4811]: 
I1203 00:08:21.695358 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" event={"ID":"e538a4e7-dd56-48db-828e-49af39ac5def","Type":"ContainerStarted","Data":"c745ce7a0e440e88a2576cdb73ec79e0b1649ad19773bd833f266317da3c69d5"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.705442 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.714574 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" event={"ID":"97cfb89f-0902-42fd-9e4d-a1459f2f2511","Type":"ContainerStarted","Data":"2a0f2e81e2333baae33f73d31e3a99d2bc68f20fab9110c2c17dcc91be36c82b"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.714631 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.725432 4811 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4ds94 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" start-of-body= Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.725685 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" podUID="97cfb89f-0902-42fd-9e4d-a1459f2f2511" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.36:6443/healthz\": dial tcp 10.217.0.36:6443: connect: connection refused" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.736018 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kqpch"] Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.737131 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.764843 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-catalog-content\") pod \"community-operators-6dpqh\" (UID: \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\") " pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.764907 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-utilities\") pod \"community-operators-6dpqh\" (UID: \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\") " pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.765138 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.765184 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7kpj\" (UniqueName: \"kubernetes.io/projected/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-kube-api-access-s7kpj\") pod \"community-operators-6dpqh\" (UID: \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\") " pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.766186 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-utilities\") pod \"community-operators-6dpqh\" (UID: \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\") " pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.766564 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-catalog-content\") pod \"community-operators-6dpqh\" (UID: \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\") " pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:08:21 crc kubenswrapper[4811]: E1203 00:08:21.768247 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:22.268226541 +0000 UTC m=+142.410056023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.775716 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" podStartSLOduration=123.775690642 podStartE2EDuration="2m3.775690642s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:21.772351497 +0000 UTC m=+141.914180969" watchObservedRunningTime="2025-12-03 00:08:21.775690642 +0000 UTC m=+141.917520114" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.780582 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" event={"ID":"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401","Type":"ContainerStarted","Data":"a1d2bc9c58af5f510b3f2b3ba17cc69195216f7ea8b34e87518c5221514d44dd"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.781921 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.788745 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-956mn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.788971 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" podUID="48ef7152-2ec1-4cfa-b2ab-88ff2fb42401" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.867623 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7kpj\" (UniqueName: \"kubernetes.io/projected/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-kube-api-access-s7kpj\") pod \"community-operators-6dpqh\" (UID: \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\") " pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.868672 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.869029 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw4ms\" (UniqueName: \"kubernetes.io/projected/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-kube-api-access-hw4ms\") pod \"certified-operators-kqpch\" (UID: \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\") " 
pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.869148 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-utilities\") pod \"certified-operators-kqpch\" (UID: \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\") " pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.869253 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-catalog-content\") pod \"certified-operators-kqpch\" (UID: \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\") " pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:08:21 crc kubenswrapper[4811]: E1203 00:08:21.872510 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:22.37248594 +0000 UTC m=+142.514315412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.887160 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kqpch"] Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.887333 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt" event={"ID":"72e13ac0-ed91-41a8-8df4-ca88a2838fd3","Type":"ContainerStarted","Data":"3c5c0e9658cbabeb78cc608a67087909cdde34eb9976ba37008bcc4a7feac366"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.887366 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" event={"ID":"513c0923-f361-46af-8761-b4d809c1b287","Type":"ContainerStarted","Data":"64ffcf476d06c2b8f5c1c70ff749a38e04fafe956fb96b5260a5b3052b2e3742"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.905562 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" event={"ID":"5f915f72-36dd-40f3-a47c-7245505bf997","Type":"ContainerStarted","Data":"526710f0900e6435dec250174faa76188a4f209710bd28567908cb9b90f2de11"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.935993 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j64d2" podStartSLOduration=123.935975966 podStartE2EDuration="2m3.935975966s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:21.934667133 +0000 UTC m=+142.076496605" watchObservedRunningTime="2025-12-03 00:08:21.935975966 +0000 UTC m=+142.077805438" Dec 03 00:08:21 crc 
kubenswrapper[4811]: I1203 00:08:21.959634 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8447f" event={"ID":"d5716cc3-cd8a-40c5-82b2-7c1389e8eeaf","Type":"ContainerStarted","Data":"6ddbc7267760f64a6fdbba10c7a62eb5d962f62dfc24d4fd324050cc0444cc5c"} Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.983553 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.983623 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw4ms\" (UniqueName: \"kubernetes.io/projected/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-kube-api-access-hw4ms\") pod \"certified-operators-kqpch\" (UID: \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\") " pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.983686 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-utilities\") pod \"certified-operators-kqpch\" (UID: \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\") " pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.983781 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-catalog-content\") pod \"certified-operators-kqpch\" (UID: \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\") " pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:08:21 crc kubenswrapper[4811]: E1203 00:08:21.985110 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:22.485093644 +0000 UTC m=+142.626923286 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.986023 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-utilities\") pod \"certified-operators-kqpch\" (UID: \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\") " pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.988778 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-catalog-content\") pod \"certified-operators-kqpch\" (UID: \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\") " pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:08:21 crc kubenswrapper[4811]: I1203 00:08:21.996780 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9jgcw" event={"ID":"cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d","Type":"ContainerStarted","Data":"8150ddea983fa54c4da3d4567ad565e1cce598120fbf0f30c5d06a6327a94e45"} Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:21.999876 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wpzs6" event={"ID":"4a68baef-4258-4aea-b775-172682cbf844","Type":"ContainerStarted","Data":"b0d01e81b1e61b51b9961d1eee90fd68fd6a24eb3e487b379a3dd3c87415e807"} Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.011873 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.027603 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" event={"ID":"2735423a-1c0a-489c-ada2-f5ba5aa58397","Type":"ContainerStarted","Data":"0a4501fc03dc6c1954e527aac0d81c94203e4733cc2625f72e6bf7b8cd9594be"} Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.030300 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw4ms\" (UniqueName: \"kubernetes.io/projected/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-kube-api-access-hw4ms\") pod \"certified-operators-kqpch\" (UID: \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\") " pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.031581 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-5j74h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.031636 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5j74h" podUID="6531f918-708c-4bb8-a418-d09dfb7a8b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.043089 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.084496 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:22 crc kubenswrapper[4811]: E1203 00:08:22.085426 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:22.585413052 +0000 UTC m=+142.727242524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.119682 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q87bq" podStartSLOduration=124.11966684 podStartE2EDuration="2m4.11966684s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:22.06070269 +0000 UTC m=+142.202532162" watchObservedRunningTime="2025-12-03 00:08:22.11966684 +0000 UTC m=+142.261496312" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.147085 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.235667 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.240995 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-ssdgl" podStartSLOduration=8.240961045 podStartE2EDuration="8.240961045s" podCreationTimestamp="2025-12-03 00:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:22.205332383 +0000 UTC m=+142.347161865" watchObservedRunningTime="2025-12-03 00:08:22.240961045 +0000 UTC m=+142.382790547" Dec 03 00:08:22 crc kubenswrapper[4811]: E1203 00:08:22.241336 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:22.741308915 +0000 UTC m=+142.883138607 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.271978 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:22 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:22 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:22 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.272044 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.280937 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s6jkt" podStartSLOduration=124.280889248 podStartE2EDuration="2m4.280889248s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:22.257831667 +0000 UTC m=+142.399661139" watchObservedRunningTime="2025-12-03 00:08:22.280889248 +0000 UTC m=+142.422718720" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.318831 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ldnmm" podStartSLOduration=124.318806518 podStartE2EDuration="2m4.318806518s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:22.293745067 +0000 UTC m=+142.435574539" watchObservedRunningTime="2025-12-03 00:08:22.318806518 +0000 UTC m=+142.460635990" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.338467 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.345918 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-b8mqn" Dec 03 00:08:22 crc kubenswrapper[4811]: E1203 00:08:22.351207 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:22.851166227 +0000 UTC m=+142.992995699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.351470 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:22 crc kubenswrapper[4811]: E1203 00:08:22.351953 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:22.851940427 +0000 UTC m=+142.993769899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.406774 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" podStartSLOduration=124.406757591 podStartE2EDuration="2m4.406757591s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:22.405747124 +0000 UTC m=+142.547576586" watchObservedRunningTime="2025-12-03 00:08:22.406757591 +0000 UTC m=+142.548587063" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.454046 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9jgcw" podStartSLOduration=124.45402908 podStartE2EDuration="2m4.45402908s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:22.452703756 +0000 UTC m=+142.594533218" watchObservedRunningTime="2025-12-03 00:08:22.45402908 +0000 UTC m=+142.595858552" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.463098 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:22 crc kubenswrapper[4811]: E1203 00:08:22.463632 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-12-03 00:08:22.963610266 +0000 UTC m=+143.105439738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.495883 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" podStartSLOduration=125.495845321 podStartE2EDuration="2m5.495845321s" podCreationTimestamp="2025-12-03 00:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:22.494838736 +0000 UTC m=+142.636668218" watchObservedRunningTime="2025-12-03 00:08:22.495845321 +0000 UTC m=+142.637674793" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.568001 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:22 crc kubenswrapper[4811]: E1203 00:08:22.568543 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:23.068528642 +0000 UTC m=+143.210358114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.569877 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x87s8" podStartSLOduration=124.569867716 podStartE2EDuration="2m4.569867716s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:22.538291768 +0000 UTC m=+142.680121240" watchObservedRunningTime="2025-12-03 00:08:22.569867716 +0000 UTC m=+142.711697188" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.570174 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nj6b5" podStartSLOduration=124.570170244 podStartE2EDuration="2m4.570170244s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:22.569106157 +0000 UTC m=+142.710935629" watchObservedRunningTime="2025-12-03 00:08:22.570170244 +0000 UTC m=+142.711999716" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.626250 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rk4f" podStartSLOduration=125.626234289 podStartE2EDuration="2m5.626234289s" podCreationTimestamp="2025-12-03 00:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:22.624960848 +0000 UTC m=+142.766790320" watchObservedRunningTime="2025-12-03 00:08:22.626234289 +0000 UTC m=+142.768063761" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.670861 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:22 crc kubenswrapper[4811]: E1203 00:08:22.671557 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:23.17154214 +0000 UTC m=+143.313371612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.692518 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfsfv"] Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.773926 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:22 crc kubenswrapper[4811]: E1203 00:08:22.774695 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:23.274642449 +0000 UTC m=+143.416471921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.789887 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7fqbx" podStartSLOduration=124.789870079 podStartE2EDuration="2m4.789870079s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:22.788277758 +0000 UTC m=+142.930107230" watchObservedRunningTime="2025-12-03 00:08:22.789870079 +0000 UTC m=+142.931699541" Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.876136 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:22 crc kubenswrapper[4811]: E1203 00:08:22.877247 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:23.377221456 +0000 UTC m=+143.519050928 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:22 crc kubenswrapper[4811]: I1203 00:08:22.978120 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:22 crc kubenswrapper[4811]: E1203 00:08:22.978519 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:23.478507529 +0000 UTC m=+143.620337001 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.052344 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wbv94" event={"ID":"d5580144-bfbe-490a-a448-2c225be80621","Type":"ContainerStarted","Data":"ffe1fd6cc063edb0a20da04138add47a316c6f901d11f0c53ddb57128c28ea36"} Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.070895 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xj4zn"] Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.073406 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.083807 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:23 crc kubenswrapper[4811]: E1203 00:08:23.083888 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:23.583872957 +0000 UTC m=+143.725702429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.084439 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:23 crc kubenswrapper[4811]: E1203 00:08:23.084929 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:23.584878163 +0000 UTC m=+143.726707635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.099079 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.102219 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ms8z7"] Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.114806 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pstnw" event={"ID":"9746028f-d836-467c-91d8-4530f09ac665","Type":"ContainerStarted","Data":"31a122b86b8e8231ecfe9ba8274b98523d173449afb0110eee2700cad243d1b4"} Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.124000 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj4zn"] Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.181593 4811 generic.go:334] "Generic (PLEG): container finished" podID="5f915f72-36dd-40f3-a47c-7245505bf997" containerID="526710f0900e6435dec250174faa76188a4f209710bd28567908cb9b90f2de11" exitCode=0 Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.182051 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" event={"ID":"5f915f72-36dd-40f3-a47c-7245505bf997","Type":"ContainerDied","Data":"526710f0900e6435dec250174faa76188a4f209710bd28567908cb9b90f2de11"} Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.193298 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.193645 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0c5586-e964-4734-a361-bcc6d34dfc8b-catalog-content\") pod \"redhat-marketplace-xj4zn\" (UID: \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\") " pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.193794 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0c5586-e964-4734-a361-bcc6d34dfc8b-utilities\") pod \"redhat-marketplace-xj4zn\" (UID: \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\") " pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.193827 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgmsf\" (UniqueName: \"kubernetes.io/projected/3f0c5586-e964-4734-a361-bcc6d34dfc8b-kube-api-access-hgmsf\") pod \"redhat-marketplace-xj4zn\" (UID: \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\") " pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:08:23 crc kubenswrapper[4811]: E1203 00:08:23.194499 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:23.694475389 +0000 UTC m=+143.836304861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.224554 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-22wtr" event={"ID":"d9d15a09-7646-4c97-ba16-3a1fd2d212ab","Type":"ContainerStarted","Data":"d181149bb09863815939763d0c405915a763719a549e05daf2f4b5ad980eccea"} Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.266221 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" event={"ID":"87d9c733-9e98-470e-9c8d-cb8b1817784c","Type":"ContainerStarted","Data":"cca4666941bf4a696ac3430e6bf8f6546a3b1bce446c477ced9194d8aa536ca0"} Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.266919 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.274696 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:23 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:23 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:23 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:23 
crc kubenswrapper[4811]: I1203 00:08:23.274781 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.292190 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" event={"ID":"ac601043-ae94-4202-a446-ed478524075b","Type":"ContainerStarted","Data":"f9916c4d62e3d3e1098de7195b5e3fa1df0e5b13beb2b5ea2cccd555ee6d7999"} Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.299056 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.299153 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0c5586-e964-4734-a361-bcc6d34dfc8b-utilities\") pod \"redhat-marketplace-xj4zn\" (UID: \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\") " pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.299181 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgmsf\" (UniqueName: \"kubernetes.io/projected/3f0c5586-e964-4734-a361-bcc6d34dfc8b-kube-api-access-hgmsf\") pod \"redhat-marketplace-xj4zn\" (UID: \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\") " pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.299228 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0c5586-e964-4734-a361-bcc6d34dfc8b-catalog-content\") pod \"redhat-marketplace-xj4zn\" (UID: \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\") " pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:08:23 crc kubenswrapper[4811]: E1203 00:08:23.300661 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:23.800647717 +0000 UTC m=+143.942477189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.301199 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0c5586-e964-4734-a361-bcc6d34dfc8b-utilities\") pod \"redhat-marketplace-xj4zn\" (UID: \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\") " pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.301209 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0c5586-e964-4734-a361-bcc6d34dfc8b-catalog-content\") pod \"redhat-marketplace-xj4zn\" (UID: \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\") " pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.341318 4811 generic.go:334] "Generic (PLEG): container finished" podID="fc347b57-7a13-480c-b630-f2486ce233fc" containerID="90d1a096b7f6e9e72106f5cdf7372be9b87e5ac3d7766f9fbf886b70bd74bf72" exitCode=0 Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.341590 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" event={"ID":"fc347b57-7a13-480c-b630-f2486ce233fc","Type":"ContainerDied","Data":"90d1a096b7f6e9e72106f5cdf7372be9b87e5ac3d7766f9fbf886b70bd74bf72"} Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.360700 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-pstnw" podStartSLOduration=125.360671454 podStartE2EDuration="2m5.360671454s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:23.261920465 +0000 UTC m=+143.403749937" watchObservedRunningTime="2025-12-03 00:08:23.360671454 +0000 UTC m=+143.502500926" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.369844 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-22wtr" podStartSLOduration=125.369830379 podStartE2EDuration="2m5.369830379s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:23.333889868 +0000 UTC m=+143.475719340" watchObservedRunningTime="2025-12-03 00:08:23.369830379 +0000 UTC m=+143.511659841" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.382181 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfsfv" event={"ID":"e02c207e-d2f6-4c42-8e80-8967413395c0","Type":"ContainerStarted","Data":"da5a9af32023b256e504d99d49b60651999f568367309954ead3611e89394bfd"} Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.399839 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgmsf\" (UniqueName: 
\"kubernetes.io/projected/3f0c5586-e964-4734-a361-bcc6d34dfc8b-kube-api-access-hgmsf\") pod \"redhat-marketplace-xj4zn\" (UID: \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\") " pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.414133 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:23 crc kubenswrapper[4811]: E1203 00:08:23.414534 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:23.914503962 +0000 UTC m=+144.056333434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.425427 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cdlqr" event={"ID":"2393a548-e232-4d77-bbc2-f3daa72338c4","Type":"ContainerStarted","Data":"d9100ae0c2b6e1d74c7e9044d7136efea2fb28a6eb33e156b1c9478aaf19a248"} Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.425465 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.425476 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cdlqr" event={"ID":"2393a548-e232-4d77-bbc2-f3daa72338c4","Type":"ContainerStarted","Data":"e47e5a3beb76e6d298c78d765021de447b7b9b8c14242ec21d5a0e81788718f5"} Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.430423 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-956mn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.430483 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" podUID="48ef7152-2ec1-4cfa-b2ab-88ff2fb42401" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.445546 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" podStartSLOduration=125.445518247 podStartE2EDuration="2m5.445518247s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:23.393731271 +0000 UTC m=+143.535560733" 
watchObservedRunningTime="2025-12-03 00:08:23.445518247 +0000 UTC m=+143.587347719" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.454188 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.462016 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r5kqq"] Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.463001 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.521941 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:23 crc kubenswrapper[4811]: E1203 00:08:23.525902 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:24.025882554 +0000 UTC m=+144.167712026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.541227 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5kqq"] Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.593322 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-cdlqr" podStartSLOduration=9.59330604 podStartE2EDuration="9.59330604s" podCreationTimestamp="2025-12-03 00:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:23.591637428 +0000 UTC m=+143.733466900" watchObservedRunningTime="2025-12-03 00:08:23.59330604 +0000 UTC m=+143.735135512" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.609339 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dpqh"] Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.629670 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.630468 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnld2\" (UniqueName: \"kubernetes.io/projected/a12a877a-9029-4eed-919f-6b21efa268ab-kube-api-access-lnld2\") pod \"redhat-marketplace-r5kqq\" (UID: 
\"a12a877a-9029-4eed-919f-6b21efa268ab\") " pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.630568 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12a877a-9029-4eed-919f-6b21efa268ab-catalog-content\") pod \"redhat-marketplace-r5kqq\" (UID: \"a12a877a-9029-4eed-919f-6b21efa268ab\") " pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.630589 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12a877a-9029-4eed-919f-6b21efa268ab-utilities\") pod \"redhat-marketplace-r5kqq\" (UID: \"a12a877a-9029-4eed-919f-6b21efa268ab\") " pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:08:23 crc kubenswrapper[4811]: E1203 00:08:23.630775 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:24.130731679 +0000 UTC m=+144.272561151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.735213 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnld2\" (UniqueName: \"kubernetes.io/projected/a12a877a-9029-4eed-919f-6b21efa268ab-kube-api-access-lnld2\") pod \"redhat-marketplace-r5kqq\" (UID: \"a12a877a-9029-4eed-919f-6b21efa268ab\") " pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.735635 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12a877a-9029-4eed-919f-6b21efa268ab-catalog-content\") pod \"redhat-marketplace-r5kqq\" (UID: \"a12a877a-9029-4eed-919f-6b21efa268ab\") " pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.735660 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12a877a-9029-4eed-919f-6b21efa268ab-utilities\") pod \"redhat-marketplace-r5kqq\" (UID: \"a12a877a-9029-4eed-919f-6b21efa268ab\") " pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.735689 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:23 crc kubenswrapper[4811]: E1203 00:08:23.735940 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:24.235930692 +0000 UTC m=+144.377760164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.736445 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12a877a-9029-4eed-919f-6b21efa268ab-catalog-content\") pod \"redhat-marketplace-r5kqq\" (UID: \"a12a877a-9029-4eed-919f-6b21efa268ab\") " pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.736650 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12a877a-9029-4eed-919f-6b21efa268ab-utilities\") pod \"redhat-marketplace-r5kqq\" (UID: \"a12a877a-9029-4eed-919f-6b21efa268ab\") " pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.789888 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnld2\" (UniqueName: \"kubernetes.io/projected/a12a877a-9029-4eed-919f-6b21efa268ab-kube-api-access-lnld2\") pod \"redhat-marketplace-r5kqq\" (UID: \"a12a877a-9029-4eed-919f-6b21efa268ab\") " pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.837328 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:23 crc kubenswrapper[4811]: E1203 00:08:23.838089 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:24.338063547 +0000 UTC m=+144.479893019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.885351 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kqpch"] Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.896846 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.911738 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:08:23 crc kubenswrapper[4811]: I1203 00:08:23.939544 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:23 crc kubenswrapper[4811]: E1203 00:08:23.939992 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:24.439968497 +0000 UTC m=+144.581797969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.042009 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:24 crc kubenswrapper[4811]: E1203 00:08:24.042849 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:24.54283211 +0000 UTC m=+144.684661582 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.145640 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:24 crc kubenswrapper[4811]: E1203 00:08:24.146162 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:24.646146216 +0000 UTC m=+144.787975688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.175652 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.176871 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.176976 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.206485 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.206848 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.265323 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:24 crc kubenswrapper[4811]: E1203 00:08:24.265983 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:24.765948193 +0000 UTC m=+144.907777665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.278332 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:24 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:24 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:24 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.278412 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.322919 4811 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.355363 4811 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-03T00:08:24.322954372Z","Handler":null,"Name":""} Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.375176 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6cafc64-e623-435f-b1e3-161451556900-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6cafc64-e623-435f-b1e3-161451556900\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.375236 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.375344 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6cafc64-e623-435f-b1e3-161451556900-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6cafc64-e623-435f-b1e3-161451556900\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 00:08:24 crc kubenswrapper[4811]: E1203 00:08:24.376021 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:24.87599559 +0000 UTC m=+145.017825062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cgwfj" (UID: "71d18873-465e-4bc9-aca1-149975060eff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.418270 4811 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.418323 4811 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.453770 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bqlcb"] Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.454799 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.471855 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.472366 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" event={"ID":"5f915f72-36dd-40f3-a47c-7245505bf997","Type":"ContainerStarted","Data":"745462ce8465c7b1f64fe7c92c411e7101d48cfda0dc68a94351e4d1f95539a9"} Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.480034 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.480773 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6cafc64-e623-435f-b1e3-161451556900-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6cafc64-e623-435f-b1e3-161451556900\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.480922 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6cafc64-e623-435f-b1e3-161451556900-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6cafc64-e623-435f-b1e3-161451556900\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.480951 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ca1166-555e-4be2-b998-59bad45528df-catalog-content\") pod \"redhat-operators-bqlcb\" (UID: \"41ca1166-555e-4be2-b998-59bad45528df\") " pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.481067 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ca1166-555e-4be2-b998-59bad45528df-utilities\") pod \"redhat-operators-bqlcb\" (UID: \"41ca1166-555e-4be2-b998-59bad45528df\") " pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.481148 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnn7v\" (UniqueName: \"kubernetes.io/projected/41ca1166-555e-4be2-b998-59bad45528df-kube-api-access-hnn7v\") pod \"redhat-operators-bqlcb\" (UID: \"41ca1166-555e-4be2-b998-59bad45528df\") " pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.481860 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6cafc64-e623-435f-b1e3-161451556900-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6cafc64-e623-435f-b1e3-161451556900\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.481884 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bqlcb"] Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.519869 4811 generic.go:334] "Generic (PLEG): container finished" podID="e02c207e-d2f6-4c42-8e80-8967413395c0" containerID="ccf7e8fbec21327a164a591e3948745bd9a1c63ebf6ab5bbb7669ee6d937152f" exitCode=0 Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.519972 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfsfv" event={"ID":"e02c207e-d2f6-4c42-8e80-8967413395c0","Type":"ContainerDied","Data":"ccf7e8fbec21327a164a591e3948745bd9a1c63ebf6ab5bbb7669ee6d937152f"} Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.521341 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.525010 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.533578 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6cafc64-e623-435f-b1e3-161451556900-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6cafc64-e623-435f-b1e3-161451556900\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.542202 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqpch" event={"ID":"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3","Type":"ContainerStarted","Data":"b1374ce92de3a5fcdb0db6367a2df52a231b5a4e649df90c9fb37006af45a6c7"} Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.542448 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqpch" event={"ID":"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3","Type":"ContainerStarted","Data":"f1f4947ebd67d5a452debc99195c8b6b2e9c2f7dc52e3a06d90e0efa8a8bb127"} Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.579664 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" podStartSLOduration=126.579639905 podStartE2EDuration="2m6.579639905s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:24.578335992 +0000 UTC m=+144.720165464" watchObservedRunningTime="2025-12-03 00:08:24.579639905 +0000 UTC m=+144.721469377" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.582541 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ca1166-555e-4be2-b998-59bad45528df-catalog-content\") pod \"redhat-operators-bqlcb\" (UID: \"41ca1166-555e-4be2-b998-59bad45528df\") " pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.582596 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.582674 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ca1166-555e-4be2-b998-59bad45528df-utilities\") pod \"redhat-operators-bqlcb\" (UID: \"41ca1166-555e-4be2-b998-59bad45528df\") " pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.582737 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnn7v\" (UniqueName: \"kubernetes.io/projected/41ca1166-555e-4be2-b998-59bad45528df-kube-api-access-hnn7v\") pod \"redhat-operators-bqlcb\" (UID: \"41ca1166-555e-4be2-b998-59bad45528df\") " pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.588633 4811 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.588633 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" event={"ID":"ac601043-ae94-4202-a446-ed478524075b","Type":"ContainerStarted","Data":"b0f3d1942ae3564f6fe5f5e8476543b67d3b4ae54d856e68fa41c8955bc03edc"} Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.611830 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.611973 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.623465 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ca1166-555e-4be2-b998-59bad45528df-catalog-content\") pod \"redhat-operators-bqlcb\" (UID: \"41ca1166-555e-4be2-b998-59bad45528df\") " pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.624657 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ca1166-555e-4be2-b998-59bad45528df-utilities\") pod \"redhat-operators-bqlcb\" (UID: \"41ca1166-555e-4be2-b998-59bad45528df\") " pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.630230 4811 generic.go:334] "Generic (PLEG): container finished" podID="43747cdd-50ef-43df-b98d-a4d855984bb3" containerID="434eb47b63d7255af346c68a2f85afbccf09f0b2c4c9aeeadc7e0c737fded88b" exitCode=0 Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.630990 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ms8z7" event={"ID":"43747cdd-50ef-43df-b98d-a4d855984bb3","Type":"ContainerDied","Data":"434eb47b63d7255af346c68a2f85afbccf09f0b2c4c9aeeadc7e0c737fded88b"} Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.631539 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ms8z7" event={"ID":"43747cdd-50ef-43df-b98d-a4d855984bb3","Type":"ContainerStarted","Data":"4f98333a99ca5aabf1c34b055ad08a4b0da6b85f24f3e158cfb700f34520f475"} Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.634153 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnn7v\" (UniqueName: \"kubernetes.io/projected/41ca1166-555e-4be2-b998-59bad45528df-kube-api-access-hnn7v\") pod \"redhat-operators-bqlcb\" (UID: \"41ca1166-555e-4be2-b998-59bad45528df\") " pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.676098 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.766329 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wbv94" event={"ID":"d5580144-bfbe-490a-a448-2c225be80621","Type":"ContainerStarted","Data":"f088d491dc219abfb23a0e4dacf0a5cbc691a8f96a0afb825b66ad78fea8dbb3"} Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.782380 4811 generic.go:334] "Generic (PLEG): container finished" podID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" containerID="78fd001f4096de433871e6b3450de1eb219e9b3c4f60247b222f7a056eed499e" exitCode=0 Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.782924 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dpqh" event={"ID":"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d","Type":"ContainerDied","Data":"78fd001f4096de433871e6b3450de1eb219e9b3c4f60247b222f7a056eed499e"} Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.783056 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dpqh" event={"ID":"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d","Type":"ContainerStarted","Data":"7f0eeab000c62a3af8fa84bfa36a22b67ed5d0cc5aaf50c504d8d5d0c92ccd2e"} Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.810632 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.848991 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-wbv94" podStartSLOduration=127.84895645 podStartE2EDuration="2m7.84895645s" podCreationTimestamp="2025-12-03 00:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:24.811506071 +0000 UTC m=+144.953335543" watchObservedRunningTime="2025-12-03 00:08:24.84895645 +0000 UTC m=+144.990785922" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.864352 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj4zn"] Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.898912 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5kqq"] Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.901490 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6cr44"] Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.921242 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.950013 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cgwfj\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:24 crc kubenswrapper[4811]: I1203 00:08:24.950885 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6cr44"] Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.045412 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.103726 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptxkz\" (UniqueName: \"kubernetes.io/projected/7f133170-9779-4a12-86d0-43c6e9c16da8-kube-api-access-ptxkz\") pod \"redhat-operators-6cr44\" (UID: \"7f133170-9779-4a12-86d0-43c6e9c16da8\") " pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.104303 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f133170-9779-4a12-86d0-43c6e9c16da8-catalog-content\") pod \"redhat-operators-6cr44\" (UID: \"7f133170-9779-4a12-86d0-43c6e9c16da8\") " pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.104339 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f133170-9779-4a12-86d0-43c6e9c16da8-utilities\") pod \"redhat-operators-6cr44\" (UID: \"7f133170-9779-4a12-86d0-43c6e9c16da8\") " pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.213465 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f133170-9779-4a12-86d0-43c6e9c16da8-catalog-content\") pod \"redhat-operators-6cr44\" (UID: \"7f133170-9779-4a12-86d0-43c6e9c16da8\") " pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.213539 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f133170-9779-4a12-86d0-43c6e9c16da8-utilities\") pod \"redhat-operators-6cr44\" (UID: \"7f133170-9779-4a12-86d0-43c6e9c16da8\") " pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.213605 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptxkz\" (UniqueName: \"kubernetes.io/projected/7f133170-9779-4a12-86d0-43c6e9c16da8-kube-api-access-ptxkz\") pod \"redhat-operators-6cr44\" (UID: \"7f133170-9779-4a12-86d0-43c6e9c16da8\") " pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.214672 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f133170-9779-4a12-86d0-43c6e9c16da8-catalog-content\") pod \"redhat-operators-6cr44\" (UID: \"7f133170-9779-4a12-86d0-43c6e9c16da8\") " pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.215128 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f133170-9779-4a12-86d0-43c6e9c16da8-utilities\") pod \"redhat-operators-6cr44\" (UID: \"7f133170-9779-4a12-86d0-43c6e9c16da8\") " pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.267189 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptxkz\" (UniqueName: \"kubernetes.io/projected/7f133170-9779-4a12-86d0-43c6e9c16da8-kube-api-access-ptxkz\") pod \"redhat-operators-6cr44\" (UID: 
\"7f133170-9779-4a12-86d0-43c6e9c16da8\") " pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.348283 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:25 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:25 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:25 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.348361 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.553747 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.651984 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.681601 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.712730 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bqlcb"] Dec 03 00:08:25 crc kubenswrapper[4811]: W1203 00:08:25.750616 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41ca1166_555e_4be2_b998_59bad45528df.slice/crio-b877bd9a58e01c2f1dcb9cac4106e9982471bb8eb7432f2e3d1d1fc7223baab9 WatchSource:0}: Error finding container b877bd9a58e01c2f1dcb9cac4106e9982471bb8eb7432f2e3d1d1fc7223baab9: Status 404 returned error can't find the container with id b877bd9a58e01c2f1dcb9cac4106e9982471bb8eb7432f2e3d1d1fc7223baab9 Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.809721 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" event={"ID":"fc347b57-7a13-480c-b630-f2486ce233fc","Type":"ContainerDied","Data":"15ea85ad76ec5b37e8e1d449fe84c0e22d1963d63f56f48ace7b1e56888b563e"} Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.810164 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15ea85ad76ec5b37e8e1d449fe84c0e22d1963d63f56f48ace7b1e56888b563e" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.809767 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-vfb2p" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.811025 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqlcb" event={"ID":"41ca1166-555e-4be2-b998-59bad45528df","Type":"ContainerStarted","Data":"b877bd9a58e01c2f1dcb9cac4106e9982471bb8eb7432f2e3d1d1fc7223baab9"} Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.823900 4811 generic.go:334] "Generic (PLEG): container finished" podID="7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" containerID="b1374ce92de3a5fcdb0db6367a2df52a231b5a4e649df90c9fb37006af45a6c7" exitCode=0 Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.824029 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqpch" event={"ID":"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3","Type":"ContainerDied","Data":"b1374ce92de3a5fcdb0db6367a2df52a231b5a4e649df90c9fb37006af45a6c7"} Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.826506 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" event={"ID":"ac601043-ae94-4202-a446-ed478524075b","Type":"ContainerStarted","Data":"09ec6e04cfba6cdf91a2d1aa8a49fc60edc32c4f06895a17731eb813be51e6c2"} Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.826555 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" event={"ID":"ac601043-ae94-4202-a446-ed478524075b","Type":"ContainerStarted","Data":"505ecc5cb5cae3fc91c57601e15fb7c36983f1e367c6a1c214899187ef7f63b6"} Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.828377 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqxgq\" (UniqueName: \"kubernetes.io/projected/fc347b57-7a13-480c-b630-f2486ce233fc-kube-api-access-xqxgq\") pod \"fc347b57-7a13-480c-b630-f2486ce233fc\" (UID: \"fc347b57-7a13-480c-b630-f2486ce233fc\") " Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.828429 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc347b57-7a13-480c-b630-f2486ce233fc-secret-volume\") pod \"fc347b57-7a13-480c-b630-f2486ce233fc\" (UID: \"fc347b57-7a13-480c-b630-f2486ce233fc\") " Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.828541 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc347b57-7a13-480c-b630-f2486ce233fc-config-volume\") pod \"fc347b57-7a13-480c-b630-f2486ce233fc\" (UID: \"fc347b57-7a13-480c-b630-f2486ce233fc\") " Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.829834 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc347b57-7a13-480c-b630-f2486ce233fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "fc347b57-7a13-480c-b630-f2486ce233fc" (UID: "fc347b57-7a13-480c-b630-f2486ce233fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.840673 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc347b57-7a13-480c-b630-f2486ce233fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc347b57-7a13-480c-b630-f2486ce233fc" (UID: "fc347b57-7a13-480c-b630-f2486ce233fc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.841178 4811 generic.go:334] "Generic (PLEG): container finished" podID="a12a877a-9029-4eed-919f-6b21efa268ab" containerID="31a02a8322ae9e3965aa7ddf8d102ad46901ccb680ac1487d9504e963d2a833a" exitCode=0 Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.841378 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5kqq" event={"ID":"a12a877a-9029-4eed-919f-6b21efa268ab","Type":"ContainerDied","Data":"31a02a8322ae9e3965aa7ddf8d102ad46901ccb680ac1487d9504e963d2a833a"} Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.841479 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5kqq" event={"ID":"a12a877a-9029-4eed-919f-6b21efa268ab","Type":"ContainerStarted","Data":"1ebaa3aca8dff2fea5fd731408075036a7a6483fb05f730d84925f79dce510f8"} Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.841985 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc347b57-7a13-480c-b630-f2486ce233fc-kube-api-access-xqxgq" (OuterVolumeSpecName: "kube-api-access-xqxgq") pod "fc347b57-7a13-480c-b630-f2486ce233fc" (UID: "fc347b57-7a13-480c-b630-f2486ce233fc"). InnerVolumeSpecName "kube-api-access-xqxgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.875089 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj4zn" event={"ID":"3f0c5586-e964-4734-a361-bcc6d34dfc8b","Type":"ContainerDied","Data":"b1f76391188410635e90f935efd06f21f7a4aab0801583fe52d5107b0b2a3d47"} Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.877206 4811 generic.go:334] "Generic (PLEG): container finished" podID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" containerID="b1f76391188410635e90f935efd06f21f7a4aab0801583fe52d5107b0b2a3d47" exitCode=0 Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.878572 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj4zn" event={"ID":"3f0c5586-e964-4734-a361-bcc6d34dfc8b","Type":"ContainerStarted","Data":"74b75ad3a433aa02dd0cefd660a91967dd2519819679ef0ccae00aa24759d304"} Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.885254 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6cafc64-e623-435f-b1e3-161451556900","Type":"ContainerStarted","Data":"dff6fb53e2a0b7846c1a450e2fe00e803656e0d8c82adaa5aaff559e20c5ed72"} Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.936811 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc347b57-7a13-480c-b630-f2486ce233fc-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.936842 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqxgq\" (UniqueName: \"kubernetes.io/projected/fc347b57-7a13-480c-b630-f2486ce233fc-kube-api-access-xqxgq\") on node \"crc\" DevicePath \"\"" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.936855 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc347b57-7a13-480c-b630-f2486ce233fc-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.941623 4811 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cgwfj"] Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.942997 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-chklk" Dec 03 00:08:25 crc kubenswrapper[4811]: I1203 00:08:25.991483 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-zvgcl" podStartSLOduration=11.991446193 podStartE2EDuration="11.991446193s" podCreationTimestamp="2025-12-03 00:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:25.959781922 +0000 UTC m=+146.101611394" watchObservedRunningTime="2025-12-03 00:08:25.991446193 +0000 UTC m=+146.133275665" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.041513 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.041680 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.041755 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.041835 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.047358 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6cr44"] Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.048804 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.055190 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.055719 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.072440 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.129428 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.240281 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.259862 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.275424 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:26 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:26 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:26 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.275502 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.275676 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.720612 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-5j74h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.722484 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5j74h" podUID="6531f918-708c-4bb8-a418-d09dfb7a8b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.734276 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-5j74h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.734358 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5j74h" podUID="6531f918-708c-4bb8-a418-d09dfb7a8b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.915048 4811 generic.go:334] "Generic (PLEG): container finished" podID="41ca1166-555e-4be2-b998-59bad45528df" containerID="90fdba381f8ac502cd6781ddd513f3c9daa11f0db70399a1cfa58676ffcb653b" exitCode=0 Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.915157 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqlcb" event={"ID":"41ca1166-555e-4be2-b998-59bad45528df","Type":"ContainerDied","Data":"90fdba381f8ac502cd6781ddd513f3c9daa11f0db70399a1cfa58676ffcb653b"} Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.919110 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" event={"ID":"71d18873-465e-4bc9-aca1-149975060eff","Type":"ContainerStarted","Data":"b26b6b30e2a2f5c71d2e13f8307836d7752db02eb13ee5aefd998b60c8ca22e5"} Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.919163 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" event={"ID":"71d18873-465e-4bc9-aca1-149975060eff","Type":"ContainerStarted","Data":"dacfb284d444cbfef7a00639d6929d48a64a46301354a118a826fae482626745"} Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.919837 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.923969 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.924155 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.925236 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e4eb4f7a222df741184c7c831ed5e0d7aec4afa70923471ba059416b6afd7baf"} Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.929409 4811 generic.go:334] "Generic (PLEG): container finished" podID="7f133170-9779-4a12-86d0-43c6e9c16da8" containerID="57018271f291c13fe5814d677ecf11674ef4132337da1e9b629f3b039fc308d8" exitCode=0 Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.929497 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6cr44" event={"ID":"7f133170-9779-4a12-86d0-43c6e9c16da8","Type":"ContainerDied","Data":"57018271f291c13fe5814d677ecf11674ef4132337da1e9b629f3b039fc308d8"} Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.929547 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6cr44" event={"ID":"7f133170-9779-4a12-86d0-43c6e9c16da8","Type":"ContainerStarted","Data":"341fbd99ae456a0c26791535c8ff1d2ab002bdcdb19ba6df487e897ad591e2e2"} Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.933026 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6cafc64-e623-435f-b1e3-161451556900","Type":"ContainerStarted","Data":"d46276188b1601f8f163be4f3dd01222b83d4a37a4e0bf9f5f763efbd40c0755"} Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.948250 4811 patch_prober.go:28] interesting pod/apiserver-76f77b778f-wbv94 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 03 00:08:26 crc kubenswrapper[4811]: [+]log ok Dec 03 00:08:26 crc kubenswrapper[4811]: [+]etcd ok Dec 03 00:08:26 crc kubenswrapper[4811]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 03 00:08:26 crc kubenswrapper[4811]: [+]poststarthook/generic-apiserver-start-informers ok Dec 03 00:08:26 crc kubenswrapper[4811]: [+]poststarthook/max-in-flight-filter ok Dec 03 00:08:26 crc kubenswrapper[4811]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 03 00:08:26 crc kubenswrapper[4811]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 03 00:08:26 crc kubenswrapper[4811]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 03 00:08:26 crc kubenswrapper[4811]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 03 00:08:26 crc kubenswrapper[4811]: [+]poststarthook/project.openshift.io-projectcache ok Dec 03 00:08:26 crc kubenswrapper[4811]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 03 00:08:26 crc kubenswrapper[4811]: [+]poststarthook/openshift.io-startinformers ok Dec 03 00:08:26 crc kubenswrapper[4811]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 03 00:08:26 crc kubenswrapper[4811]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 03 00:08:26 crc kubenswrapper[4811]: livez check failed Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.948363 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-wbv94" podUID="d5580144-bfbe-490a-a448-2c225be80621" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.951029 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e3a707a5267cc0128a13292f68b4b7f9cf70db6e34689620e69398180455b762"} Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.955224 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ea1d291e86015d42c4022c46f52a9750a72abff17d6f8220cc8f1b72633d0fe3"} Dec 03 00:08:26 crc kubenswrapper[4811]: I1203 00:08:26.968766 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.968735455 podStartE2EDuration="2.968735455s" podCreationTimestamp="2025-12-03 00:08:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:26.968509619 +0000 UTC m=+147.110339091" watchObservedRunningTime="2025-12-03 00:08:26.968735455 +0000 UTC m=+147.110564927" Dec 03 00:08:27 crc kubenswrapper[4811]: I1203 00:08:27.038052 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" podStartSLOduration=129.038017298 podStartE2EDuration="2m9.038017298s" podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:26.992193946 +0000 UTC m=+147.134023418" watchObservedRunningTime="2025-12-03 00:08:27.038017298 +0000 UTC m=+147.179846790" Dec 03 00:08:27 crc kubenswrapper[4811]: I1203 00:08:27.236806 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:27 crc kubenswrapper[4811]: I1203 00:08:27.236877 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:27 crc kubenswrapper[4811]: I1203 00:08:27.254276 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:27 crc kubenswrapper[4811]: I1203 00:08:27.259876 4811 patch_prober.go:28] interesting pod/console-f9d7485db-9jgcw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Dec 03 00:08:27 crc kubenswrapper[4811]: I1203 00:08:27.259920 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9jgcw" podUID="cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" Dec 03 00:08:27 crc kubenswrapper[4811]: I1203 00:08:27.260663 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:27 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:27 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:27 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:27 crc kubenswrapper[4811]: I1203 00:08:27.260694 4811 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:27 crc kubenswrapper[4811]: I1203 00:08:27.567225 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:27 crc kubenswrapper[4811]: I1203 00:08:27.567821 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:27 crc kubenswrapper[4811]: I1203 00:08:27.581050 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.001691 4811 generic.go:334] "Generic (PLEG): container finished" podID="e6cafc64-e623-435f-b1e3-161451556900" containerID="d46276188b1601f8f163be4f3dd01222b83d4a37a4e0bf9f5f763efbd40c0755" exitCode=0 Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.001753 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6cafc64-e623-435f-b1e3-161451556900","Type":"ContainerDied","Data":"d46276188b1601f8f163be4f3dd01222b83d4a37a4e0bf9f5f763efbd40c0755"} Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.004066 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a4a51e5cb0192944a5161c7161929fafd52e1818107c3be7209a934df53ecd6d"} Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.037408 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e1fbd5cba717b1745ad80d42904d7e649d796571f10c5e1a08eb0b5d4c33c204"} Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.037585 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.040586 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e8f1b8f54c7fc7c11c1aacb8978d923afc559a7ad6b80199399bd698006e58db"} Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.055577 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4fkh6" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.263478 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:28 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:28 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:28 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.263602 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.462177 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 00:08:28 crc kubenswrapper[4811]: E1203 00:08:28.462412 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc347b57-7a13-480c-b630-f2486ce233fc" containerName="collect-profiles" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.462426 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc347b57-7a13-480c-b630-f2486ce233fc" containerName="collect-profiles" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.462544 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc347b57-7a13-480c-b630-f2486ce233fc" containerName="collect-profiles" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.462954 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.464998 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.468911 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.481513 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.602625 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b8ecd93-7946-4813-8a1d-8a31f64b66b8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1b8ecd93-7946-4813-8a1d-8a31f64b66b8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.602681 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b8ecd93-7946-4813-8a1d-8a31f64b66b8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1b8ecd93-7946-4813-8a1d-8a31f64b66b8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.704229 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b8ecd93-7946-4813-8a1d-8a31f64b66b8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1b8ecd93-7946-4813-8a1d-8a31f64b66b8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.704367 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b8ecd93-7946-4813-8a1d-8a31f64b66b8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1b8ecd93-7946-4813-8a1d-8a31f64b66b8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.704665 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b8ecd93-7946-4813-8a1d-8a31f64b66b8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1b8ecd93-7946-4813-8a1d-8a31f64b66b8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:28 
crc kubenswrapper[4811]: I1203 00:08:28.725463 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b8ecd93-7946-4813-8a1d-8a31f64b66b8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1b8ecd93-7946-4813-8a1d-8a31f64b66b8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:28 crc kubenswrapper[4811]: I1203 00:08:28.831417 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:29 crc kubenswrapper[4811]: I1203 00:08:29.262564 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:29 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:29 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:29 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:29 crc kubenswrapper[4811]: I1203 00:08:29.263474 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:29 crc kubenswrapper[4811]: I1203 00:08:29.307632 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 00:08:29 crc kubenswrapper[4811]: I1203 00:08:29.572255 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 00:08:29 crc kubenswrapper[4811]: I1203 00:08:29.723443 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6cafc64-e623-435f-b1e3-161451556900-kubelet-dir\") pod \"e6cafc64-e623-435f-b1e3-161451556900\" (UID: \"e6cafc64-e623-435f-b1e3-161451556900\") " Dec 03 00:08:29 crc kubenswrapper[4811]: I1203 00:08:29.723491 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6cafc64-e623-435f-b1e3-161451556900-kube-api-access\") pod \"e6cafc64-e623-435f-b1e3-161451556900\" (UID: \"e6cafc64-e623-435f-b1e3-161451556900\") " Dec 03 00:08:29 crc kubenswrapper[4811]: I1203 00:08:29.723561 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6cafc64-e623-435f-b1e3-161451556900-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e6cafc64-e623-435f-b1e3-161451556900" (UID: "e6cafc64-e623-435f-b1e3-161451556900"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:08:29 crc kubenswrapper[4811]: I1203 00:08:29.723932 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6cafc64-e623-435f-b1e3-161451556900-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:08:29 crc kubenswrapper[4811]: I1203 00:08:29.731653 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6cafc64-e623-435f-b1e3-161451556900-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e6cafc64-e623-435f-b1e3-161451556900" (UID: "e6cafc64-e623-435f-b1e3-161451556900"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:08:29 crc kubenswrapper[4811]: I1203 00:08:29.827430 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6cafc64-e623-435f-b1e3-161451556900-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:08:30 crc kubenswrapper[4811]: I1203 00:08:30.097192 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1b8ecd93-7946-4813-8a1d-8a31f64b66b8","Type":"ContainerStarted","Data":"94cb3bcc3133a5c018634ed036626e70b6f2ac725d89274724d7d19069eb4937"} Dec 03 00:08:30 crc kubenswrapper[4811]: I1203 00:08:30.109187 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6cafc64-e623-435f-b1e3-161451556900","Type":"ContainerDied","Data":"dff6fb53e2a0b7846c1a450e2fe00e803656e0d8c82adaa5aaff559e20c5ed72"} Dec 03 00:08:30 crc kubenswrapper[4811]: I1203 00:08:30.109285 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dff6fb53e2a0b7846c1a450e2fe00e803656e0d8c82adaa5aaff559e20c5ed72" Dec 03 00:08:30 crc kubenswrapper[4811]: I1203 00:08:30.109226 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 00:08:30 crc kubenswrapper[4811]: I1203 00:08:30.261642 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:30 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:30 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:30 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:30 crc kubenswrapper[4811]: I1203 00:08:30.261712 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:31 crc kubenswrapper[4811]: I1203 00:08:31.133430 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1b8ecd93-7946-4813-8a1d-8a31f64b66b8","Type":"ContainerStarted","Data":"60954f6099ee2e787f078e958408902818438174ebf77e72ec9fb1475d74e335"} Dec 03 00:08:31 crc kubenswrapper[4811]: I1203 00:08:31.160233 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.1602082129999998 podStartE2EDuration="3.160208213s" podCreationTimestamp="2025-12-03 00:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:31.155078811 +0000 UTC m=+151.296908283" watchObservedRunningTime="2025-12-03 00:08:31.160208213 +0000 UTC m=+151.302037685" Dec 03 00:08:31 crc kubenswrapper[4811]: I1203 00:08:31.257001 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:31 crc kubenswrapper[4811]: [-]has-synced failed: 
reason withheld Dec 03 00:08:31 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:31 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:31 crc kubenswrapper[4811]: I1203 00:08:31.257492 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:31 crc kubenswrapper[4811]: I1203 00:08:31.925445 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:31 crc kubenswrapper[4811]: I1203 00:08:31.938323 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-wbv94" Dec 03 00:08:32 crc kubenswrapper[4811]: I1203 00:08:32.174458 4811 generic.go:334] "Generic (PLEG): container finished" podID="1b8ecd93-7946-4813-8a1d-8a31f64b66b8" containerID="60954f6099ee2e787f078e958408902818438174ebf77e72ec9fb1475d74e335" exitCode=0 Dec 03 00:08:32 crc kubenswrapper[4811]: I1203 00:08:32.175753 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1b8ecd93-7946-4813-8a1d-8a31f64b66b8","Type":"ContainerDied","Data":"60954f6099ee2e787f078e958408902818438174ebf77e72ec9fb1475d74e335"} Dec 03 00:08:32 crc kubenswrapper[4811]: I1203 00:08:32.258650 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:32 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:32 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:32 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:32 crc kubenswrapper[4811]: I1203 00:08:32.258726 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:32 crc kubenswrapper[4811]: I1203 00:08:32.595318 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-cdlqr" Dec 03 00:08:32 crc kubenswrapper[4811]: I1203 00:08:32.939861 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:08:32 crc kubenswrapper[4811]: I1203 00:08:32.939927 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:08:33 crc kubenswrapper[4811]: I1203 00:08:33.256324 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:33 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld 
Dec 03 00:08:33 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:33 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:33 crc kubenswrapper[4811]: I1203 00:08:33.256540 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:34 crc kubenswrapper[4811]: I1203 00:08:34.257311 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:34 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:34 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:34 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:34 crc kubenswrapper[4811]: I1203 00:08:34.257868 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:35 crc kubenswrapper[4811]: I1203 00:08:35.257407 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:35 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:35 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:35 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:35 crc kubenswrapper[4811]: I1203 00:08:35.257475 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:36 crc kubenswrapper[4811]: I1203 00:08:36.256772 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:36 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:36 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:36 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:36 crc kubenswrapper[4811]: I1203 00:08:36.257475 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:36 crc kubenswrapper[4811]: I1203 00:08:36.716145 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-5j74h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Dec 03 00:08:36 crc kubenswrapper[4811]: I1203 00:08:36.716187 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-5j74h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: 
connection refused" start-of-body= Dec 03 00:08:36 crc kubenswrapper[4811]: I1203 00:08:36.716226 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5j74h" podUID="6531f918-708c-4bb8-a418-d09dfb7a8b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Dec 03 00:08:36 crc kubenswrapper[4811]: I1203 00:08:36.716275 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5j74h" podUID="6531f918-708c-4bb8-a418-d09dfb7a8b3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Dec 03 00:08:37 crc kubenswrapper[4811]: I1203 00:08:37.237940 4811 patch_prober.go:28] interesting pod/console-f9d7485db-9jgcw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Dec 03 00:08:37 crc kubenswrapper[4811]: I1203 00:08:37.238336 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9jgcw" podUID="cbb5bcd3-d7ec-42c6-acc2-12d538d5a86d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" Dec 03 00:08:37 crc kubenswrapper[4811]: I1203 00:08:37.257442 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:37 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:37 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:37 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:37 crc kubenswrapper[4811]: I1203 00:08:37.257546 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:38 crc kubenswrapper[4811]: I1203 00:08:38.255990 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:38 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:38 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:38 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:38 crc kubenswrapper[4811]: I1203 00:08:38.256054 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:39 crc kubenswrapper[4811]: I1203 00:08:39.256083 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:39 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Dec 03 00:08:39 crc kubenswrapper[4811]: 
[+]process-running ok Dec 03 00:08:39 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:39 crc kubenswrapper[4811]: I1203 00:08:39.256538 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:40 crc kubenswrapper[4811]: I1203 00:08:40.257024 4811 patch_prober.go:28] interesting pod/router-default-5444994796-hd7pw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:40 crc kubenswrapper[4811]: [+]has-synced ok Dec 03 00:08:40 crc kubenswrapper[4811]: [+]process-running ok Dec 03 00:08:40 crc kubenswrapper[4811]: healthz check failed Dec 03 00:08:40 crc kubenswrapper[4811]: I1203 00:08:40.257094 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hd7pw" podUID="84658e5e-5af1-49b0-a45a-f7aa7a852a98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.244727 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.251497 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c-metrics-certs\") pod \"network-metrics-daemon-5w9pv\" (UID: \"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c\") " pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.257848 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.267121 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hd7pw" Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.279781 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1b8ecd93-7946-4813-8a1d-8a31f64b66b8","Type":"ContainerDied","Data":"94cb3bcc3133a5c018634ed036626e70b6f2ac725d89274724d7d19069eb4937"} Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.279919 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94cb3bcc3133a5c018634ed036626e70b6f2ac725d89274724d7d19069eb4937" Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.289804 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5w9pv" Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.297721 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.352711 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b8ecd93-7946-4813-8a1d-8a31f64b66b8-kube-api-access\") pod \"1b8ecd93-7946-4813-8a1d-8a31f64b66b8\" (UID: \"1b8ecd93-7946-4813-8a1d-8a31f64b66b8\") " Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.363453 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8ecd93-7946-4813-8a1d-8a31f64b66b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1b8ecd93-7946-4813-8a1d-8a31f64b66b8" (UID: "1b8ecd93-7946-4813-8a1d-8a31f64b66b8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.454450 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b8ecd93-7946-4813-8a1d-8a31f64b66b8-kubelet-dir\") pod \"1b8ecd93-7946-4813-8a1d-8a31f64b66b8\" (UID: \"1b8ecd93-7946-4813-8a1d-8a31f64b66b8\") " Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.454812 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8ecd93-7946-4813-8a1d-8a31f64b66b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1b8ecd93-7946-4813-8a1d-8a31f64b66b8" (UID: "1b8ecd93-7946-4813-8a1d-8a31f64b66b8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.454853 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b8ecd93-7946-4813-8a1d-8a31f64b66b8-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:08:41 crc kubenswrapper[4811]: I1203 00:08:41.556020 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b8ecd93-7946-4813-8a1d-8a31f64b66b8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:08:42 crc kubenswrapper[4811]: I1203 00:08:42.285275 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:45 crc kubenswrapper[4811]: I1203 00:08:45.056215 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:08:46 crc kubenswrapper[4811]: I1203 00:08:46.722151 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5j74h" Dec 03 00:08:47 crc kubenswrapper[4811]: I1203 00:08:47.277305 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:47 crc kubenswrapper[4811]: I1203 00:08:47.281214 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9jgcw" Dec 03 00:08:51 crc kubenswrapper[4811]: E1203 00:08:51.636024 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac1cc52b_99ba_456c_8ffe_cebbc7d4e827.slice/crio-9f12ddca80b89cadb1233f88420b3ed6cd3218c4a324a1b928678736645915aa.scope\": RecentStats: unable to find data in memory cache]" Dec 03 00:08:53 crc kubenswrapper[4811]: I1203 00:08:53.416303 4811 generic.go:334] "Generic (PLEG): container finished" podID="ac1cc52b-99ba-456c-8ffe-cebbc7d4e827" containerID="9f12ddca80b89cadb1233f88420b3ed6cd3218c4a324a1b928678736645915aa" exitCode=0 Dec 03 00:08:53 crc kubenswrapper[4811]: I1203 00:08:53.418816 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29412000-7zxph" event={"ID":"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827","Type":"ContainerDied","Data":"9f12ddca80b89cadb1233f88420b3ed6cd3218c4a324a1b928678736645915aa"} Dec 03 00:08:55 crc kubenswrapper[4811]: I1203 00:08:55.342474 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29412000-7zxph" Dec 03 00:08:55 crc kubenswrapper[4811]: I1203 00:08:55.450498 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29412000-7zxph" event={"ID":"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827","Type":"ContainerDied","Data":"8bc65c26eff30c3a4bbdbd5a425f79722814eb2fdd4258d55e7abb1d20712365"} Dec 03 00:08:55 crc kubenswrapper[4811]: I1203 00:08:55.450557 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bc65c26eff30c3a4bbdbd5a425f79722814eb2fdd4258d55e7abb1d20712365" Dec 03 00:08:55 crc kubenswrapper[4811]: I1203 00:08:55.450592 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29412000-7zxph" Dec 03 00:08:55 crc kubenswrapper[4811]: I1203 00:08:55.451098 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac1cc52b-99ba-456c-8ffe-cebbc7d4e827-serviceca\") pod \"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827\" (UID: \"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827\") " Dec 03 00:08:55 crc kubenswrapper[4811]: I1203 00:08:55.451246 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8b8z\" (UniqueName: \"kubernetes.io/projected/ac1cc52b-99ba-456c-8ffe-cebbc7d4e827-kube-api-access-k8b8z\") pod \"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827\" (UID: \"ac1cc52b-99ba-456c-8ffe-cebbc7d4e827\") " Dec 03 00:08:55 crc kubenswrapper[4811]: I1203 00:08:55.452232 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1cc52b-99ba-456c-8ffe-cebbc7d4e827-serviceca" (OuterVolumeSpecName: "serviceca") pod "ac1cc52b-99ba-456c-8ffe-cebbc7d4e827" (UID: "ac1cc52b-99ba-456c-8ffe-cebbc7d4e827"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:08:55 crc kubenswrapper[4811]: I1203 00:08:55.488011 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1cc52b-99ba-456c-8ffe-cebbc7d4e827-kube-api-access-k8b8z" (OuterVolumeSpecName: "kube-api-access-k8b8z") pod "ac1cc52b-99ba-456c-8ffe-cebbc7d4e827" (UID: "ac1cc52b-99ba-456c-8ffe-cebbc7d4e827"). InnerVolumeSpecName "kube-api-access-k8b8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:08:55 crc kubenswrapper[4811]: I1203 00:08:55.552926 4811 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac1cc52b-99ba-456c-8ffe-cebbc7d4e827-serviceca\") on node \"crc\" DevicePath \"\"" Dec 03 00:08:55 crc kubenswrapper[4811]: I1203 00:08:55.552977 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8b8z\" (UniqueName: \"kubernetes.io/projected/ac1cc52b-99ba-456c-8ffe-cebbc7d4e827-kube-api-access-k8b8z\") on node \"crc\" DevicePath \"\"" Dec 03 00:08:56 crc kubenswrapper[4811]: I1203 00:08:56.760968 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fbzcm" Dec 03 00:09:00 crc kubenswrapper[4811]: E1203 00:09:00.336877 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 00:09:00 crc kubenswrapper[4811]: E1203 00:09:00.337946 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptxkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6cr44_openshift-marketplace(7f133170-9779-4a12-86d0-43c6e9c16da8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:00 crc kubenswrapper[4811]: E1203 00:09:00.339193 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6cr44" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" Dec 03 00:09:00 crc kubenswrapper[4811]: E1203 00:09:00.342157 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 00:09:00 crc kubenswrapper[4811]: E1203 00:09:00.342400 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7kpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6dpqh_openshift-marketplace(08cf8c1a-2191-4e7c-bba4-2ecc51132d8d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:00 crc kubenswrapper[4811]: E1203 00:09:00.343604 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6dpqh" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" Dec 03 00:09:00 crc kubenswrapper[4811]: E1203 00:09:00.345335 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 00:09:00 crc kubenswrapper[4811]: E1203 00:09:00.345510 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnn7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bqlcb_openshift-marketplace(41ca1166-555e-4be2-b998-59bad45528df): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:00 crc kubenswrapper[4811]: E1203 00:09:00.346703 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bqlcb" podUID="41ca1166-555e-4be2-b998-59bad45528df" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.271715 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 00:09:01 crc kubenswrapper[4811]: E1203 00:09:01.272545 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1cc52b-99ba-456c-8ffe-cebbc7d4e827" containerName="image-pruner" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.272564 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1cc52b-99ba-456c-8ffe-cebbc7d4e827" containerName="image-pruner" Dec 03 00:09:01 crc kubenswrapper[4811]: E1203 00:09:01.272582 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6cafc64-e623-435f-b1e3-161451556900" containerName="pruner" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.272590 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6cafc64-e623-435f-b1e3-161451556900" containerName="pruner" Dec 03 00:09:01 crc kubenswrapper[4811]: E1203 00:09:01.272606 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8ecd93-7946-4813-8a1d-8a31f64b66b8" containerName="pruner" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.272614 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8ecd93-7946-4813-8a1d-8a31f64b66b8" containerName="pruner" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.272737 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6cafc64-e623-435f-b1e3-161451556900" containerName="pruner" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.272754 4811 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1cc52b-99ba-456c-8ffe-cebbc7d4e827" containerName="image-pruner" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.272766 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8ecd93-7946-4813-8a1d-8a31f64b66b8" containerName="pruner" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.273346 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.273853 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.289639 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.289989 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.426849 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5fd806-9d16-4232-9ea9-2bee7e895684-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7d5fd806-9d16-4232-9ea9-2bee7e895684\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.426919 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5fd806-9d16-4232-9ea9-2bee7e895684-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7d5fd806-9d16-4232-9ea9-2bee7e895684\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.527633 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5fd806-9d16-4232-9ea9-2bee7e895684-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7d5fd806-9d16-4232-9ea9-2bee7e895684\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.527679 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5fd806-9d16-4232-9ea9-2bee7e895684-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7d5fd806-9d16-4232-9ea9-2bee7e895684\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.527755 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5fd806-9d16-4232-9ea9-2bee7e895684-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7d5fd806-9d16-4232-9ea9-2bee7e895684\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.549966 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5fd806-9d16-4232-9ea9-2bee7e895684-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7d5fd806-9d16-4232-9ea9-2bee7e895684\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:01 crc kubenswrapper[4811]: E1203 00:09:01.588450 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bqlcb" podUID="41ca1166-555e-4be2-b998-59bad45528df" Dec 03 00:09:01 crc kubenswrapper[4811]: E1203 00:09:01.588512 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6cr44" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" Dec 03 00:09:01 crc kubenswrapper[4811]: E1203 00:09:01.588529 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6dpqh" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" Dec 03 00:09:01 crc kubenswrapper[4811]: I1203 00:09:01.618222 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:01 crc kubenswrapper[4811]: E1203 00:09:01.746203 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 00:09:01 crc kubenswrapper[4811]: E1203 00:09:01.746852 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgmsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xj4zn_openshift-marketplace(3f0c5586-e964-4734-a361-bcc6d34dfc8b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:01 crc kubenswrapper[4811]: E1203 00:09:01.750063 4811 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xj4zn" podUID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" Dec 03 00:09:01 crc kubenswrapper[4811]: E1203 00:09:01.855276 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 00:09:01 crc kubenswrapper[4811]: E1203 00:09:01.855562 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnld2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-r5kqq_openshift-marketplace(a12a877a-9029-4eed-919f-6b21efa268ab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:01 crc kubenswrapper[4811]: E1203 00:09:01.857017 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-r5kqq" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" Dec 03 00:09:02 crc kubenswrapper[4811]: I1203 00:09:02.141390 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 00:09:02 crc kubenswrapper[4811]: I1203 00:09:02.227745 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5w9pv"] Dec 03 00:09:02 crc kubenswrapper[4811]: W1203 00:09:02.291621 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccad4150_e89e_4a2c_8b0f_5ab1f9429f2c.slice/crio-b345ef3488270f80c1d3e6759cccfc772a2243af2b01cee2e35ca2292330981d WatchSource:0}: Error finding container b345ef3488270f80c1d3e6759cccfc772a2243af2b01cee2e35ca2292330981d: Status 404 returned error can't find the container with id b345ef3488270f80c1d3e6759cccfc772a2243af2b01cee2e35ca2292330981d Dec 03 00:09:02 crc kubenswrapper[4811]: I1203 00:09:02.549205 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" event={"ID":"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c","Type":"ContainerStarted","Data":"b345ef3488270f80c1d3e6759cccfc772a2243af2b01cee2e35ca2292330981d"} Dec 03 00:09:02 crc kubenswrapper[4811]: I1203 00:09:02.554796 4811 generic.go:334] "Generic (PLEG): container finished" podID="7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" containerID="7e5b56e08e29036ff09f45970f77a984b0911a855bb080ad3423a9ece08685c9" exitCode=0 Dec 03 00:09:02 crc kubenswrapper[4811]: I1203 00:09:02.555156 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqpch" event={"ID":"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3","Type":"ContainerDied","Data":"7e5b56e08e29036ff09f45970f77a984b0911a855bb080ad3423a9ece08685c9"} Dec 03 00:09:02 crc kubenswrapper[4811]: I1203 00:09:02.563187 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7d5fd806-9d16-4232-9ea9-2bee7e895684","Type":"ContainerStarted","Data":"8f6592f06e28775d7ed72ba3a113ebdb61083c402601fb0660788ec483a6cf67"} Dec 03 00:09:02 crc kubenswrapper[4811]: I1203 00:09:02.568589 4811 generic.go:334] "Generic (PLEG): container finished" podID="43747cdd-50ef-43df-b98d-a4d855984bb3" containerID="295af902fb079365ec1da2d58423cd1c7ebe017aad8b9f2608ff4aad6020fc51" exitCode=0 Dec 03 00:09:02 crc kubenswrapper[4811]: I1203 00:09:02.568672 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ms8z7" event={"ID":"43747cdd-50ef-43df-b98d-a4d855984bb3","Type":"ContainerDied","Data":"295af902fb079365ec1da2d58423cd1c7ebe017aad8b9f2608ff4aad6020fc51"} Dec 03 00:09:02 crc kubenswrapper[4811]: I1203 00:09:02.574737 4811 generic.go:334] "Generic (PLEG): container finished" podID="e02c207e-d2f6-4c42-8e80-8967413395c0" containerID="4499585db793f671a98c2a882fc7d685adb0b88f879d47ba810fda3b2dc23ad4" exitCode=0 Dec 03 00:09:02 crc kubenswrapper[4811]: I1203 00:09:02.575097 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfsfv" event={"ID":"e02c207e-d2f6-4c42-8e80-8967413395c0","Type":"ContainerDied","Data":"4499585db793f671a98c2a882fc7d685adb0b88f879d47ba810fda3b2dc23ad4"} Dec 03 00:09:02 crc kubenswrapper[4811]: E1203 00:09:02.578093 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-r5kqq" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" Dec 03 00:09:02 crc kubenswrapper[4811]: E1203 00:09:02.579022 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xj4zn" 
podUID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" Dec 03 00:09:02 crc kubenswrapper[4811]: I1203 00:09:02.940450 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:09:02 crc kubenswrapper[4811]: I1203 00:09:02.940529 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:09:03 crc kubenswrapper[4811]: I1203 00:09:03.587663 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfsfv" event={"ID":"e02c207e-d2f6-4c42-8e80-8967413395c0","Type":"ContainerStarted","Data":"1e19cdf1de3461f8084eb4a7eff8624c5d40ecf2b256cb99d0aa0df212bdeb04"} Dec 03 00:09:03 crc kubenswrapper[4811]: I1203 00:09:03.589951 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" event={"ID":"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c","Type":"ContainerStarted","Data":"a7bf6503d3c81cb09bcf7fe8b55f86df330756c24e551fdea689183338b1dc01"} Dec 03 00:09:03 crc kubenswrapper[4811]: I1203 00:09:03.590004 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5w9pv" event={"ID":"ccad4150-e89e-4a2c-8b0f-5ab1f9429f2c","Type":"ContainerStarted","Data":"5faa54ee4dc9c38a96692f187eec9e50115081c25478c685c613f8c9632a0015"} Dec 03 00:09:03 crc kubenswrapper[4811]: I1203 00:09:03.592914 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqpch" event={"ID":"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3","Type":"ContainerStarted","Data":"325e7330930684d16ed929e20b0f67df1a0efbb860e2cef3e48814c7d6f3ce0f"} Dec 03 00:09:03 crc kubenswrapper[4811]: I1203 00:09:03.595808 4811 generic.go:334] "Generic (PLEG): container finished" podID="7d5fd806-9d16-4232-9ea9-2bee7e895684" containerID="1c38e1500e8e85c947a5f8df45b81d5a3941888950a7bb3cb274eb94effd19ee" exitCode=0 Dec 03 00:09:03 crc kubenswrapper[4811]: I1203 00:09:03.595856 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7d5fd806-9d16-4232-9ea9-2bee7e895684","Type":"ContainerDied","Data":"1c38e1500e8e85c947a5f8df45b81d5a3941888950a7bb3cb274eb94effd19ee"} Dec 03 00:09:03 crc kubenswrapper[4811]: I1203 00:09:03.607516 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sfsfv" podStartSLOduration=3.926996918 podStartE2EDuration="42.607501121s" podCreationTimestamp="2025-12-03 00:08:21 +0000 UTC" firstStartedPulling="2025-12-03 00:08:24.524730308 +0000 UTC m=+144.666559780" lastFinishedPulling="2025-12-03 00:09:03.205234491 +0000 UTC m=+183.347063983" observedRunningTime="2025-12-03 00:09:03.605751477 +0000 UTC m=+183.747580949" watchObservedRunningTime="2025-12-03 00:09:03.607501121 +0000 UTC m=+183.749330593" Dec 03 00:09:03 crc kubenswrapper[4811]: I1203 00:09:03.674425 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5w9pv" podStartSLOduration=165.674404154 podStartE2EDuration="2m45.674404154s" 
podCreationTimestamp="2025-12-03 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:09:03.653499019 +0000 UTC m=+183.795328491" watchObservedRunningTime="2025-12-03 00:09:03.674404154 +0000 UTC m=+183.816233626" Dec 03 00:09:04 crc kubenswrapper[4811]: I1203 00:09:04.614322 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ms8z7" event={"ID":"43747cdd-50ef-43df-b98d-a4d855984bb3","Type":"ContainerStarted","Data":"b7fe22ee668a14a52302ca06ae4348d8ce6079f5efc96d1abc135ea03322d3cc"} Dec 03 00:09:04 crc kubenswrapper[4811]: I1203 00:09:04.636874 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ms8z7" podStartSLOduration=4.736830534 podStartE2EDuration="43.636856497s" podCreationTimestamp="2025-12-03 00:08:21 +0000 UTC" firstStartedPulling="2025-12-03 00:08:24.692481014 +0000 UTC m=+144.834310486" lastFinishedPulling="2025-12-03 00:09:03.592506977 +0000 UTC m=+183.734336449" observedRunningTime="2025-12-03 00:09:04.635729838 +0000 UTC m=+184.777559310" watchObservedRunningTime="2025-12-03 00:09:04.636856497 +0000 UTC m=+184.778685969" Dec 03 00:09:04 crc kubenswrapper[4811]: I1203 00:09:04.638078 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kqpch" podStartSLOduration=6.289484057 podStartE2EDuration="43.638071498s" podCreationTimestamp="2025-12-03 00:08:21 +0000 UTC" firstStartedPulling="2025-12-03 00:08:25.827552596 +0000 UTC m=+145.969382058" lastFinishedPulling="2025-12-03 00:09:03.176140017 +0000 UTC m=+183.317969499" observedRunningTime="2025-12-03 00:09:03.67502415 +0000 UTC m=+183.816853622" watchObservedRunningTime="2025-12-03 00:09:04.638071498 +0000 UTC m=+184.779900970" Dec 03 00:09:04 crc kubenswrapper[4811]: I1203 00:09:04.898966 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4ds94"] Dec 03 00:09:05 crc kubenswrapper[4811]: I1203 00:09:05.165896 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:05 crc kubenswrapper[4811]: I1203 00:09:05.314216 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5fd806-9d16-4232-9ea9-2bee7e895684-kube-api-access\") pod \"7d5fd806-9d16-4232-9ea9-2bee7e895684\" (UID: \"7d5fd806-9d16-4232-9ea9-2bee7e895684\") " Dec 03 00:09:05 crc kubenswrapper[4811]: I1203 00:09:05.314361 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5fd806-9d16-4232-9ea9-2bee7e895684-kubelet-dir\") pod \"7d5fd806-9d16-4232-9ea9-2bee7e895684\" (UID: \"7d5fd806-9d16-4232-9ea9-2bee7e895684\") " Dec 03 00:09:05 crc kubenswrapper[4811]: I1203 00:09:05.314606 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5fd806-9d16-4232-9ea9-2bee7e895684-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7d5fd806-9d16-4232-9ea9-2bee7e895684" (UID: "7d5fd806-9d16-4232-9ea9-2bee7e895684"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:09:05 crc kubenswrapper[4811]: I1203 00:09:05.320493 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5fd806-9d16-4232-9ea9-2bee7e895684-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7d5fd806-9d16-4232-9ea9-2bee7e895684" (UID: "7d5fd806-9d16-4232-9ea9-2bee7e895684"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:09:05 crc kubenswrapper[4811]: I1203 00:09:05.415651 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5fd806-9d16-4232-9ea9-2bee7e895684-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:05 crc kubenswrapper[4811]: I1203 00:09:05.415692 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5fd806-9d16-4232-9ea9-2bee7e895684-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:05 crc kubenswrapper[4811]: I1203 00:09:05.628435 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:05 crc kubenswrapper[4811]: I1203 00:09:05.628428 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7d5fd806-9d16-4232-9ea9-2bee7e895684","Type":"ContainerDied","Data":"8f6592f06e28775d7ed72ba3a113ebdb61083c402601fb0660788ec483a6cf67"} Dec 03 00:09:05 crc kubenswrapper[4811]: I1203 00:09:05.628495 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f6592f06e28775d7ed72ba3a113ebdb61083c402601fb0660788ec483a6cf67" Dec 03 00:09:06 crc kubenswrapper[4811]: I1203 00:09:06.245526 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.246330 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 00:09:07 crc kubenswrapper[4811]: E1203 00:09:07.246887 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5fd806-9d16-4232-9ea9-2bee7e895684" containerName="pruner" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.246901 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5fd806-9d16-4232-9ea9-2bee7e895684" containerName="pruner" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.247033 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5fd806-9d16-4232-9ea9-2bee7e895684" containerName="pruner" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.247516 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.249746 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.250495 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.278490 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.344123 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d224a153-46f1-4689-96f6-98d13fc54aea-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d224a153-46f1-4689-96f6-98d13fc54aea\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.344186 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d224a153-46f1-4689-96f6-98d13fc54aea-kube-api-access\") pod \"installer-9-crc\" (UID: \"d224a153-46f1-4689-96f6-98d13fc54aea\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.344210 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d224a153-46f1-4689-96f6-98d13fc54aea-var-lock\") pod \"installer-9-crc\" (UID: \"d224a153-46f1-4689-96f6-98d13fc54aea\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.445799 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d224a153-46f1-4689-96f6-98d13fc54aea-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d224a153-46f1-4689-96f6-98d13fc54aea\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.445887 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d224a153-46f1-4689-96f6-98d13fc54aea-kube-api-access\") pod \"installer-9-crc\" (UID: \"d224a153-46f1-4689-96f6-98d13fc54aea\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.445921 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d224a153-46f1-4689-96f6-98d13fc54aea-var-lock\") pod \"installer-9-crc\" (UID: \"d224a153-46f1-4689-96f6-98d13fc54aea\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.445964 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d224a153-46f1-4689-96f6-98d13fc54aea-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d224a153-46f1-4689-96f6-98d13fc54aea\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.446119 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d224a153-46f1-4689-96f6-98d13fc54aea-var-lock\") pod \"installer-9-crc\" (UID: 
\"d224a153-46f1-4689-96f6-98d13fc54aea\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.477294 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d224a153-46f1-4689-96f6-98d13fc54aea-kube-api-access\") pod \"installer-9-crc\" (UID: \"d224a153-46f1-4689-96f6-98d13fc54aea\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:07 crc kubenswrapper[4811]: I1203 00:09:07.595934 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:08 crc kubenswrapper[4811]: I1203 00:09:08.033552 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 00:09:08 crc kubenswrapper[4811]: W1203 00:09:08.041233 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd224a153_46f1_4689_96f6_98d13fc54aea.slice/crio-d2175f0b59676e9ca5592775a2070a4a0fb3aacca53e9999cd34ac4030ce26fc WatchSource:0}: Error finding container d2175f0b59676e9ca5592775a2070a4a0fb3aacca53e9999cd34ac4030ce26fc: Status 404 returned error can't find the container with id d2175f0b59676e9ca5592775a2070a4a0fb3aacca53e9999cd34ac4030ce26fc Dec 03 00:09:08 crc kubenswrapper[4811]: I1203 00:09:08.652080 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d224a153-46f1-4689-96f6-98d13fc54aea","Type":"ContainerStarted","Data":"d2175f0b59676e9ca5592775a2070a4a0fb3aacca53e9999cd34ac4030ce26fc"} Dec 03 00:09:09 crc kubenswrapper[4811]: I1203 00:09:09.658920 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d224a153-46f1-4689-96f6-98d13fc54aea","Type":"ContainerStarted","Data":"868a2d1058b62d97e9137c8500a3098271020df70310a5667b6ae3d9cd845f31"} Dec 03 00:09:09 crc kubenswrapper[4811]: I1203 00:09:09.676067 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.67604724 podStartE2EDuration="2.67604724s" podCreationTimestamp="2025-12-03 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:09:09.669760919 +0000 UTC m=+189.811590401" watchObservedRunningTime="2025-12-03 00:09:09.67604724 +0000 UTC m=+189.817876712" Dec 03 00:09:11 crc kubenswrapper[4811]: I1203 00:09:11.574243 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:09:11 crc kubenswrapper[4811]: I1203 00:09:11.574566 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:09:11 crc kubenswrapper[4811]: I1203 00:09:11.641215 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:09:11 crc kubenswrapper[4811]: I1203 00:09:11.705970 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:09:11 crc kubenswrapper[4811]: I1203 00:09:11.706029 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:09:11 crc kubenswrapper[4811]: I1203 00:09:11.711636 4811 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:09:11 crc kubenswrapper[4811]: I1203 00:09:11.763308 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:09:12 crc kubenswrapper[4811]: I1203 00:09:12.149243 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:09:12 crc kubenswrapper[4811]: I1203 00:09:12.149619 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:09:12 crc kubenswrapper[4811]: I1203 00:09:12.198227 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:09:12 crc kubenswrapper[4811]: I1203 00:09:12.715003 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:09:12 crc kubenswrapper[4811]: I1203 00:09:12.722824 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:09:14 crc kubenswrapper[4811]: I1203 00:09:14.471404 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kqpch"] Dec 03 00:09:15 crc kubenswrapper[4811]: I1203 00:09:15.692554 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kqpch" podUID="7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" containerName="registry-server" containerID="cri-o://325e7330930684d16ed929e20b0f67df1a0efbb860e2cef3e48814c7d6f3ce0f" gracePeriod=2 Dec 03 00:09:16 crc kubenswrapper[4811]: I1203 00:09:16.699435 4811 generic.go:334] "Generic (PLEG): container finished" podID="7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" containerID="325e7330930684d16ed929e20b0f67df1a0efbb860e2cef3e48814c7d6f3ce0f" exitCode=0 Dec 03 00:09:16 crc kubenswrapper[4811]: I1203 00:09:16.699494 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqpch" event={"ID":"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3","Type":"ContainerDied","Data":"325e7330930684d16ed929e20b0f67df1a0efbb860e2cef3e48814c7d6f3ce0f"} Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.050182 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.176495 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-utilities\") pod \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\" (UID: \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\") " Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.176629 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw4ms\" (UniqueName: \"kubernetes.io/projected/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-kube-api-access-hw4ms\") pod \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\" (UID: \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\") " Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.176669 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-catalog-content\") pod \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\" (UID: \"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3\") " Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.179506 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-utilities" (OuterVolumeSpecName: "utilities") pod "7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" (UID: "7ae6eb53-bd86-41f5-bc17-3c5eb1220af3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.184748 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-kube-api-access-hw4ms" (OuterVolumeSpecName: "kube-api-access-hw4ms") pod "7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" (UID: "7ae6eb53-bd86-41f5-bc17-3c5eb1220af3"). InnerVolumeSpecName "kube-api-access-hw4ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.235955 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" (UID: "7ae6eb53-bd86-41f5-bc17-3c5eb1220af3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.279161 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.279499 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw4ms\" (UniqueName: \"kubernetes.io/projected/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-kube-api-access-hw4ms\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.279513 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.711051 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqpch" event={"ID":"7ae6eb53-bd86-41f5-bc17-3c5eb1220af3","Type":"ContainerDied","Data":"f1f4947ebd67d5a452debc99195c8b6b2e9c2f7dc52e3a06d90e0efa8a8bb127"} Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.711120 4811 scope.go:117] "RemoveContainer" containerID="325e7330930684d16ed929e20b0f67df1a0efbb860e2cef3e48814c7d6f3ce0f" Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.711282 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kqpch" Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.743058 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kqpch"] Dec 03 00:09:17 crc kubenswrapper[4811]: I1203 00:09:17.748965 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kqpch"] Dec 03 00:09:18 crc kubenswrapper[4811]: I1203 00:09:18.130985 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" path="/var/lib/kubelet/pods/7ae6eb53-bd86-41f5-bc17-3c5eb1220af3/volumes" Dec 03 00:09:23 crc kubenswrapper[4811]: I1203 00:09:23.105926 4811 scope.go:117] "RemoveContainer" containerID="7e5b56e08e29036ff09f45970f77a984b0911a855bb080ad3423a9ece08685c9" Dec 03 00:09:26 crc kubenswrapper[4811]: I1203 00:09:26.561749 4811 scope.go:117] "RemoveContainer" containerID="b1374ce92de3a5fcdb0db6367a2df52a231b5a4e649df90c9fb37006af45a6c7" Dec 03 00:09:29 crc kubenswrapper[4811]: I1203 00:09:29.944427 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" podUID="97cfb89f-0902-42fd-9e4d-a1459f2f2511" containerName="oauth-openshift" containerID="cri-o://2a0f2e81e2333baae33f73d31e3a99d2bc68f20fab9110c2c17dcc91be36c82b" gracePeriod=15 Dec 03 00:09:30 crc kubenswrapper[4811]: I1203 00:09:30.799848 4811 generic.go:334] "Generic (PLEG): container finished" podID="97cfb89f-0902-42fd-9e4d-a1459f2f2511" containerID="2a0f2e81e2333baae33f73d31e3a99d2bc68f20fab9110c2c17dcc91be36c82b" exitCode=0 Dec 03 00:09:30 crc kubenswrapper[4811]: I1203 00:09:30.799954 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" event={"ID":"97cfb89f-0902-42fd-9e4d-a1459f2f2511","Type":"ContainerDied","Data":"2a0f2e81e2333baae33f73d31e3a99d2bc68f20fab9110c2c17dcc91be36c82b"} Dec 03 00:09:30 crc kubenswrapper[4811]: I1203 00:09:30.802086 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6cr44" event={"ID":"7f133170-9779-4a12-86d0-43c6e9c16da8","Type":"ContainerStarted","Data":"1fb3d8e556512b4d1d00ffd03bd37a5ec4925d0343fb1959af47520bcd25d5c4"} Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.303329 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.342658 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79558c6cc6-rnc5h"] Dec 03 00:09:31 crc kubenswrapper[4811]: E1203 00:09:31.343345 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" containerName="extract-utilities" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.343439 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" containerName="extract-utilities" Dec 03 00:09:31 crc kubenswrapper[4811]: E1203 00:09:31.343523 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" containerName="extract-content" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.343603 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" containerName="extract-content" Dec 03 00:09:31 crc kubenswrapper[4811]: E1203 00:09:31.343673 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" containerName="registry-server" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.343738 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" containerName="registry-server" Dec 03 00:09:31 crc kubenswrapper[4811]: E1203 00:09:31.343809 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97cfb89f-0902-42fd-9e4d-a1459f2f2511" containerName="oauth-openshift" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.343887 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="97cfb89f-0902-42fd-9e4d-a1459f2f2511" containerName="oauth-openshift" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.344067 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="97cfb89f-0902-42fd-9e4d-a1459f2f2511" containerName="oauth-openshift" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.344147 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae6eb53-bd86-41f5-bc17-3c5eb1220af3" containerName="registry-server" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.344728 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.358452 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79558c6cc6-rnc5h"] Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380160 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-ocp-branding-template\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380220 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97cfb89f-0902-42fd-9e4d-a1459f2f2511-audit-dir\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380247 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-idp-0-file-data\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380280 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-cliconfig\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380307 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-login\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380327 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-router-certs\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380362 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-serving-cert\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380381 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwbt4\" (UniqueName: \"kubernetes.io/projected/97cfb89f-0902-42fd-9e4d-a1459f2f2511-kube-api-access-wwbt4\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380412 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-error\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380433 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-trusted-ca-bundle\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380463 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-service-ca\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380488 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-session\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380525 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-provider-selection\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.380544 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-audit-policies\") pod \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\" (UID: \"97cfb89f-0902-42fd-9e4d-a1459f2f2511\") " Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.381520 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.383719 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.385482 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97cfb89f-0902-42fd-9e4d-a1459f2f2511-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.385894 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.386441 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.395826 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97cfb89f-0902-42fd-9e4d-a1459f2f2511-kube-api-access-wwbt4" (OuterVolumeSpecName: "kube-api-access-wwbt4") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "kube-api-access-wwbt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.398633 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.401190 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.401977 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.402212 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.402451 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.405280 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.406297 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.410013 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "97cfb89f-0902-42fd-9e4d-a1459f2f2511" (UID: "97cfb89f-0902-42fd-9e4d-a1459f2f2511"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.481942 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.481995 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.482330 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.482404 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.482473 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.482563 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-session\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.482602 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-user-template-error\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.482767 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-user-template-login\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.482806 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw7gv\" (UniqueName: \"kubernetes.io/projected/3dd1a628-c5cc-43c7-9197-c60f676df1b6-kube-api-access-cw7gv\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.482859 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3dd1a628-c5cc-43c7-9197-c60f676df1b6-audit-policies\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.482888 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3dd1a628-c5cc-43c7-9197-c60f676df1b6-audit-dir\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.482920 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.482954 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.482996 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483075 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483095 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-trusted-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483113 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483128 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483141 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483158 4811 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483170 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483181 4811 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97cfb89f-0902-42fd-9e4d-a1459f2f2511-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483194 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483209 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483224 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483240 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483269 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/97cfb89f-0902-42fd-9e4d-a1459f2f2511-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.483281 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwbt4\" (UniqueName: \"kubernetes.io/projected/97cfb89f-0902-42fd-9e4d-a1459f2f2511-kube-api-access-wwbt4\") on node \"crc\" DevicePath 
\"\"" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584046 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584096 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-session\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584126 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-user-template-error\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584163 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-user-template-login\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584185 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw7gv\" (UniqueName: \"kubernetes.io/projected/3dd1a628-c5cc-43c7-9197-c60f676df1b6-kube-api-access-cw7gv\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584213 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3dd1a628-c5cc-43c7-9197-c60f676df1b6-audit-policies\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584235 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3dd1a628-c5cc-43c7-9197-c60f676df1b6-audit-dir\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584274 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 
00:09:31.584300 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584328 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584356 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584378 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584433 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.584457 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.585934 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3dd1a628-c5cc-43c7-9197-c60f676df1b6-audit-policies\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.586276 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3dd1a628-c5cc-43c7-9197-c60f676df1b6-audit-dir\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.586715 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.587147 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.587799 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.592598 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.593518 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-user-template-error\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.593892 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.594708 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.594828 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-user-template-login\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.594952 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-session\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.595372 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.595629 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3dd1a628-c5cc-43c7-9197-c60f676df1b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.615927 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw7gv\" (UniqueName: \"kubernetes.io/projected/3dd1a628-c5cc-43c7-9197-c60f676df1b6-kube-api-access-cw7gv\") pod \"oauth-openshift-79558c6cc6-rnc5h\" (UID: \"3dd1a628-c5cc-43c7-9197-c60f676df1b6\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.727326 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.814928 4811 generic.go:334] "Generic (PLEG): container finished" podID="7f133170-9779-4a12-86d0-43c6e9c16da8" containerID="1fb3d8e556512b4d1d00ffd03bd37a5ec4925d0343fb1959af47520bcd25d5c4" exitCode=0 Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.814993 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6cr44" event={"ID":"7f133170-9779-4a12-86d0-43c6e9c16da8","Type":"ContainerDied","Data":"1fb3d8e556512b4d1d00ffd03bd37a5ec4925d0343fb1959af47520bcd25d5c4"} Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.817773 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" event={"ID":"97cfb89f-0902-42fd-9e4d-a1459f2f2511","Type":"ContainerDied","Data":"cb3efb3ae9c957845b2b027219738842990d9d77d793a223df0aff8f2c81985a"} Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.817856 4811 scope.go:117] "RemoveContainer" containerID="2a0f2e81e2333baae33f73d31e3a99d2bc68f20fab9110c2c17dcc91be36c82b" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.818011 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4ds94" Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.864692 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4ds94"] Dec 03 00:09:31 crc kubenswrapper[4811]: I1203 00:09:31.866830 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4ds94"] Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.126129 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97cfb89f-0902-42fd-9e4d-a1459f2f2511" path="/var/lib/kubelet/pods/97cfb89f-0902-42fd-9e4d-a1459f2f2511/volumes" Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.147446 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79558c6cc6-rnc5h"] Dec 03 00:09:32 crc kubenswrapper[4811]: E1203 00:09:32.190046 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda12a877a_9029_4eed_919f_6b21efa268ab.slice/crio-5fb00dd11182a7202d648861d20310bc990e905fe47ec2645018bbd89e655e84.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda12a877a_9029_4eed_919f_6b21efa268ab.slice/crio-conmon-5fb00dd11182a7202d648861d20310bc990e905fe47ec2645018bbd89e655e84.scope\": RecentStats: unable to find data in memory cache]" Dec 03 00:09:32 crc kubenswrapper[4811]: W1203 00:09:32.215052 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dd1a628_c5cc_43c7_9197_c60f676df1b6.slice/crio-f92a2b6c64e58491d1fd6205029ba0858662daeeaf8984f3ee0a358b18de76ca WatchSource:0}: Error finding container f92a2b6c64e58491d1fd6205029ba0858662daeeaf8984f3ee0a358b18de76ca: Status 404 returned error can't find the container with id f92a2b6c64e58491d1fd6205029ba0858662daeeaf8984f3ee0a358b18de76ca Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.826407 4811 generic.go:334] "Generic (PLEG): container finished" podID="41ca1166-555e-4be2-b998-59bad45528df" containerID="2f6af9c994afa620181edd778102e2b2b073dd701286f236302e0d7fa0b07b4e" exitCode=0 Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.826520 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqlcb" event={"ID":"41ca1166-555e-4be2-b998-59bad45528df","Type":"ContainerDied","Data":"2f6af9c994afa620181edd778102e2b2b073dd701286f236302e0d7fa0b07b4e"} Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.831312 4811 generic.go:334] "Generic (PLEG): container finished" podID="a12a877a-9029-4eed-919f-6b21efa268ab" containerID="5fb00dd11182a7202d648861d20310bc990e905fe47ec2645018bbd89e655e84" exitCode=0 Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.831409 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5kqq" event={"ID":"a12a877a-9029-4eed-919f-6b21efa268ab","Type":"ContainerDied","Data":"5fb00dd11182a7202d648861d20310bc990e905fe47ec2645018bbd89e655e84"} Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.835254 4811 generic.go:334] "Generic (PLEG): container finished" podID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" containerID="d442f42eb337e48916250b3e54403de674c3dcf8e5652a176901c31bbbcdc71f" exitCode=0 Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.835329 
4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj4zn" event={"ID":"3f0c5586-e964-4734-a361-bcc6d34dfc8b","Type":"ContainerDied","Data":"d442f42eb337e48916250b3e54403de674c3dcf8e5652a176901c31bbbcdc71f"} Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.841721 4811 generic.go:334] "Generic (PLEG): container finished" podID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" containerID="9030ca4144e100662d8ed57ff6a7606f13b6f72c4c59fe23e3cecb86a72d7b3e" exitCode=0 Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.841799 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dpqh" event={"ID":"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d","Type":"ContainerDied","Data":"9030ca4144e100662d8ed57ff6a7606f13b6f72c4c59fe23e3cecb86a72d7b3e"} Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.850500 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6cr44" event={"ID":"7f133170-9779-4a12-86d0-43c6e9c16da8","Type":"ContainerStarted","Data":"fb54e5e39bbc837997761d3330210eabd0e0c60f3160a5f9c3ae6418017524c2"} Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.856397 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" event={"ID":"3dd1a628-c5cc-43c7-9197-c60f676df1b6","Type":"ContainerStarted","Data":"9988cb1883f0e95170095abd6bf7294b45c0a8eddeb6e03010167dc986150f7d"} Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.857464 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" event={"ID":"3dd1a628-c5cc-43c7-9197-c60f676df1b6","Type":"ContainerStarted","Data":"f92a2b6c64e58491d1fd6205029ba0858662daeeaf8984f3ee0a358b18de76ca"} Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.858107 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.864523 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.901624 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6cr44" podStartSLOduration=3.278417606 podStartE2EDuration="1m8.901606579s" podCreationTimestamp="2025-12-03 00:08:24 +0000 UTC" firstStartedPulling="2025-12-03 00:08:26.93105485 +0000 UTC m=+147.072884322" lastFinishedPulling="2025-12-03 00:09:32.554243823 +0000 UTC m=+212.696073295" observedRunningTime="2025-12-03 00:09:32.880886366 +0000 UTC m=+213.022715838" watchObservedRunningTime="2025-12-03 00:09:32.901606579 +0000 UTC m=+213.043436051" Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.940479 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.940536 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.940570 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.941228 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25"} pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.941336 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" containerID="cri-o://84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25" gracePeriod=600 Dec 03 00:09:32 crc kubenswrapper[4811]: I1203 00:09:32.969715 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79558c6cc6-rnc5h" podStartSLOduration=28.969695276 podStartE2EDuration="28.969695276s" podCreationTimestamp="2025-12-03 00:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:09:32.96786187 +0000 UTC m=+213.109691352" watchObservedRunningTime="2025-12-03 00:09:32.969695276 +0000 UTC m=+213.111524748" Dec 03 00:09:33 crc kubenswrapper[4811]: I1203 00:09:33.894622 4811 generic.go:334] "Generic (PLEG): container finished" podID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerID="84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25" exitCode=0 Dec 03 00:09:33 crc kubenswrapper[4811]: I1203 00:09:33.894800 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerDied","Data":"84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25"} Dec 03 00:09:33 crc kubenswrapper[4811]: I1203 00:09:33.895608 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerStarted","Data":"10561c3fa5ec63e76b89f65f6adfa64f4786ff83527fb29ebb98d13b1546c538"} Dec 03 00:09:34 crc kubenswrapper[4811]: I1203 00:09:34.904120 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqlcb" event={"ID":"41ca1166-555e-4be2-b998-59bad45528df","Type":"ContainerStarted","Data":"aa7f4d80e744bc98654863977f7c091a969f9f2ac95f63d12f0580072e19ce9d"} Dec 03 00:09:34 crc kubenswrapper[4811]: I1203 00:09:34.906816 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5kqq" event={"ID":"a12a877a-9029-4eed-919f-6b21efa268ab","Type":"ContainerStarted","Data":"edab6a8d66b8d7e0a5487f294ccb839ed975c7f0db4e04fb4d445f2b1536f1d9"} Dec 03 00:09:34 crc kubenswrapper[4811]: I1203 00:09:34.930781 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bqlcb" podStartSLOduration=4.542621088 podStartE2EDuration="1m10.930759961s" podCreationTimestamp="2025-12-03 00:08:24 +0000 UTC" 
firstStartedPulling="2025-12-03 00:08:26.920999683 +0000 UTC m=+147.062829155" lastFinishedPulling="2025-12-03 00:09:33.309138556 +0000 UTC m=+213.450968028" observedRunningTime="2025-12-03 00:09:34.928814003 +0000 UTC m=+215.070643475" watchObservedRunningTime="2025-12-03 00:09:34.930759961 +0000 UTC m=+215.072589433" Dec 03 00:09:34 crc kubenswrapper[4811]: I1203 00:09:34.950991 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r5kqq" podStartSLOduration=4.563085401 podStartE2EDuration="1m11.950960442s" podCreationTimestamp="2025-12-03 00:08:23 +0000 UTC" firstStartedPulling="2025-12-03 00:08:25.86052924 +0000 UTC m=+146.002358712" lastFinishedPulling="2025-12-03 00:09:33.248404281 +0000 UTC m=+213.390233753" observedRunningTime="2025-12-03 00:09:34.948785327 +0000 UTC m=+215.090614799" watchObservedRunningTime="2025-12-03 00:09:34.950960442 +0000 UTC m=+215.092789914" Dec 03 00:09:35 crc kubenswrapper[4811]: I1203 00:09:35.554726 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:09:35 crc kubenswrapper[4811]: I1203 00:09:35.554875 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:09:36 crc kubenswrapper[4811]: I1203 00:09:36.609484 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6cr44" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" containerName="registry-server" probeResult="failure" output=< Dec 03 00:09:36 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Dec 03 00:09:36 crc kubenswrapper[4811]: > Dec 03 00:09:37 crc kubenswrapper[4811]: I1203 00:09:37.927693 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj4zn" event={"ID":"3f0c5586-e964-4734-a361-bcc6d34dfc8b","Type":"ContainerStarted","Data":"b5d327540158dfbbe80be66ef25b17a96b63276bb2b7f122e48e283d43d60c07"} Dec 03 00:09:37 crc kubenswrapper[4811]: I1203 00:09:37.930087 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dpqh" event={"ID":"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d","Type":"ContainerStarted","Data":"eb89efa2882e5fb5a36e9b02af5ecf84ffeeea9a3096711ed6e90dc11dd5d0ad"} Dec 03 00:09:37 crc kubenswrapper[4811]: I1203 00:09:37.965845 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xj4zn" podStartSLOduration=6.214976633 podStartE2EDuration="1m14.965817384s" podCreationTimestamp="2025-12-03 00:08:23 +0000 UTC" firstStartedPulling="2025-12-03 00:08:25.879060976 +0000 UTC m=+146.020890438" lastFinishedPulling="2025-12-03 00:09:34.629901717 +0000 UTC m=+214.771731189" observedRunningTime="2025-12-03 00:09:37.952242807 +0000 UTC m=+218.094072299" watchObservedRunningTime="2025-12-03 00:09:37.965817384 +0000 UTC m=+218.107646856" Dec 03 00:09:37 crc kubenswrapper[4811]: I1203 00:09:37.988740 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6dpqh" podStartSLOduration=7.218498609 podStartE2EDuration="1m16.988719411s" podCreationTimestamp="2025-12-03 00:08:21 +0000 UTC" firstStartedPulling="2025-12-03 00:08:24.788493442 +0000 UTC m=+144.930322914" lastFinishedPulling="2025-12-03 00:09:34.558714244 +0000 UTC m=+214.700543716" observedRunningTime="2025-12-03 00:09:37.985089821 +0000 UTC 
m=+218.126919293" watchObservedRunningTime="2025-12-03 00:09:37.988719411 +0000 UTC m=+218.130548883" Dec 03 00:09:42 crc kubenswrapper[4811]: I1203 00:09:42.012545 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:09:42 crc kubenswrapper[4811]: I1203 00:09:42.012906 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:09:42 crc kubenswrapper[4811]: I1203 00:09:42.059507 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:09:43 crc kubenswrapper[4811]: I1203 00:09:43.002149 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:09:43 crc kubenswrapper[4811]: I1203 00:09:43.454693 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:09:43 crc kubenswrapper[4811]: I1203 00:09:43.454813 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:09:43 crc kubenswrapper[4811]: I1203 00:09:43.535503 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:09:43 crc kubenswrapper[4811]: I1203 00:09:43.913720 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:09:43 crc kubenswrapper[4811]: I1203 00:09:43.914832 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:09:43 crc kubenswrapper[4811]: I1203 00:09:43.982976 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:09:44 crc kubenswrapper[4811]: I1203 00:09:44.026068 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:09:44 crc kubenswrapper[4811]: I1203 00:09:44.429313 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dpqh"] Dec 03 00:09:44 crc kubenswrapper[4811]: I1203 00:09:44.678800 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:09:44 crc kubenswrapper[4811]: I1203 00:09:44.678884 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:09:44 crc kubenswrapper[4811]: I1203 00:09:44.730081 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:09:44 crc kubenswrapper[4811]: I1203 00:09:44.972749 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6dpqh" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" containerName="registry-server" containerID="cri-o://eb89efa2882e5fb5a36e9b02af5ecf84ffeeea9a3096711ed6e90dc11dd5d0ad" gracePeriod=2 Dec 03 00:09:45 crc kubenswrapper[4811]: I1203 00:09:45.015496 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:09:45 crc kubenswrapper[4811]: I1203 00:09:45.022001 4811 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:09:45 crc kubenswrapper[4811]: I1203 00:09:45.605395 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:09:45 crc kubenswrapper[4811]: I1203 00:09:45.651148 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:09:45 crc kubenswrapper[4811]: I1203 00:09:45.831252 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5kqq"] Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.391157 4811 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.393041 4811 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.393915 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710" gracePeriod=15 Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.393940 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05" gracePeriod=15 Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.393965 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d" gracePeriod=15 Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.393981 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3" gracePeriod=15 Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.394002 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9" gracePeriod=15 Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.394013 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.395315 4811 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 00:09:46 crc kubenswrapper[4811]: E1203 00:09:46.395861 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.396482 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 00:09:46 crc kubenswrapper[4811]: E1203 00:09:46.396520 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.396534 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 00:09:46 crc kubenswrapper[4811]: E1203 00:09:46.396559 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.396573 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 00:09:46 crc kubenswrapper[4811]: E1203 00:09:46.396592 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.396605 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 00:09:46 crc kubenswrapper[4811]: E1203 00:09:46.396626 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.396640 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 03 00:09:46 crc kubenswrapper[4811]: E1203 00:09:46.396672 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.396686 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.397322 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.397527 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.397699 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.397739 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 
00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.397897 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.397917 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 00:09:46 crc kubenswrapper[4811]: E1203 00:09:46.399138 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.399206 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.462353 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.495848 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.495928 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.496004 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.496066 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.496112 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.496146 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.496199 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.496309 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.598389 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.598724 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.598907 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.599067 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.598597 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.598947 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.598783 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.599144 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.599508 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.599684 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.599835 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.599895 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.600069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.600154 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.600215 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.600558 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.745440 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.990914 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r5kqq" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" containerName="registry-server" containerID="cri-o://edab6a8d66b8d7e0a5487f294ccb839ed975c7f0db4e04fb4d445f2b1536f1d9" gracePeriod=2 Dec 03 00:09:46 crc kubenswrapper[4811]: I1203 00:09:46.991060 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3df41d5e809130790b3c388ca4c0d08fc3ee22b388b3ea5011a266a56dec8a5d"} Dec 03 00:09:47 crc kubenswrapper[4811]: I1203 00:09:47.024386 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 03 00:09:48 crc kubenswrapper[4811]: I1203 00:09:48.236445 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6cr44"] Dec 03 00:09:48 crc kubenswrapper[4811]: I1203 00:09:48.237435 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6cr44" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" containerName="registry-server" containerID="cri-o://fb54e5e39bbc837997761d3330210eabd0e0c60f3160a5f9c3ae6418017524c2" gracePeriod=2 Dec 03 00:09:48 crc kubenswrapper[4811]: E1203 00:09:48.879483 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.129:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d8c0693ec41d6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:09:48.878528982 +0000 UTC m=+229.020358444,LastTimestamp:2025-12-03 00:09:48.878528982 +0000 UTC m=+229.020358444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.005699 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.007786 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.009194 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05" exitCode=0 Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.009242 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9" exitCode=0 Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.009272 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d" exitCode=0 Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.009284 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3" exitCode=2 Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.009364 4811 scope.go:117] "RemoveContainer" containerID="75ff78be63d434718eb766a55bcd09e4f9b9e3f9e8b443e1f2115c7637cd4240" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.011855 4811 generic.go:334] "Generic (PLEG): container finished" podID="d224a153-46f1-4689-96f6-98d13fc54aea" containerID="868a2d1058b62d97e9137c8500a3098271020df70310a5667b6ae3d9cd845f31" exitCode=0 Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.011920 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d224a153-46f1-4689-96f6-98d13fc54aea","Type":"ContainerDied","Data":"868a2d1058b62d97e9137c8500a3098271020df70310a5667b6ae3d9cd845f31"} Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.012810 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.015871 4811 generic.go:334] "Generic (PLEG): container finished" podID="a12a877a-9029-4eed-919f-6b21efa268ab" containerID="edab6a8d66b8d7e0a5487f294ccb839ed975c7f0db4e04fb4d445f2b1536f1d9" exitCode=0 Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.015952 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5kqq" event={"ID":"a12a877a-9029-4eed-919f-6b21efa268ab","Type":"ContainerDied","Data":"edab6a8d66b8d7e0a5487f294ccb839ed975c7f0db4e04fb4d445f2b1536f1d9"} Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.017401 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"659556b64b8b826de91e7be016f2a0d20c223effde364e6dfe708c5d902f5243"} Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.018530 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.018692 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.022540 4811 
generic.go:334] "Generic (PLEG): container finished" podID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" containerID="eb89efa2882e5fb5a36e9b02af5ecf84ffeeea9a3096711ed6e90dc11dd5d0ad" exitCode=0 Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.022590 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dpqh" event={"ID":"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d","Type":"ContainerDied","Data":"eb89efa2882e5fb5a36e9b02af5ecf84ffeeea9a3096711ed6e90dc11dd5d0ad"} Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.027475 4811 generic.go:334] "Generic (PLEG): container finished" podID="7f133170-9779-4a12-86d0-43c6e9c16da8" containerID="fb54e5e39bbc837997761d3330210eabd0e0c60f3160a5f9c3ae6418017524c2" exitCode=0 Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.027524 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6cr44" event={"ID":"7f133170-9779-4a12-86d0-43c6e9c16da8","Type":"ContainerDied","Data":"fb54e5e39bbc837997761d3330210eabd0e0c60f3160a5f9c3ae6418017524c2"} Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.283739 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.285235 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.285766 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.286317 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.301336 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.302046 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.302438 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.303035 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.303437 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.348250 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-utilities\") pod \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\" (UID: \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\") " Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.348379 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-catalog-content\") pod \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\" (UID: \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\") " Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.348437 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7kpj\" (UniqueName: \"kubernetes.io/projected/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-kube-api-access-s7kpj\") pod \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\" (UID: \"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d\") " Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.348476 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f133170-9779-4a12-86d0-43c6e9c16da8-catalog-content\") pod \"7f133170-9779-4a12-86d0-43c6e9c16da8\" (UID: \"7f133170-9779-4a12-86d0-43c6e9c16da8\") " Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.348541 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptxkz\" (UniqueName: \"kubernetes.io/projected/7f133170-9779-4a12-86d0-43c6e9c16da8-kube-api-access-ptxkz\") pod \"7f133170-9779-4a12-86d0-43c6e9c16da8\" (UID: \"7f133170-9779-4a12-86d0-43c6e9c16da8\") " Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.348565 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f133170-9779-4a12-86d0-43c6e9c16da8-utilities\") pod \"7f133170-9779-4a12-86d0-43c6e9c16da8\" (UID: \"7f133170-9779-4a12-86d0-43c6e9c16da8\") " Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.349462 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-utilities" (OuterVolumeSpecName: "utilities") pod "08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" (UID: "08cf8c1a-2191-4e7c-bba4-2ecc51132d8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.349537 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f133170-9779-4a12-86d0-43c6e9c16da8-utilities" (OuterVolumeSpecName: "utilities") pod "7f133170-9779-4a12-86d0-43c6e9c16da8" (UID: "7f133170-9779-4a12-86d0-43c6e9c16da8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.356198 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-kube-api-access-s7kpj" (OuterVolumeSpecName: "kube-api-access-s7kpj") pod "08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" (UID: "08cf8c1a-2191-4e7c-bba4-2ecc51132d8d"). InnerVolumeSpecName "kube-api-access-s7kpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.358597 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f133170-9779-4a12-86d0-43c6e9c16da8-kube-api-access-ptxkz" (OuterVolumeSpecName: "kube-api-access-ptxkz") pod "7f133170-9779-4a12-86d0-43c6e9c16da8" (UID: "7f133170-9779-4a12-86d0-43c6e9c16da8"). InnerVolumeSpecName "kube-api-access-ptxkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.403614 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" (UID: "08cf8c1a-2191-4e7c-bba4-2ecc51132d8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.450530 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptxkz\" (UniqueName: \"kubernetes.io/projected/7f133170-9779-4a12-86d0-43c6e9c16da8-kube-api-access-ptxkz\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.450608 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f133170-9779-4a12-86d0-43c6e9c16da8-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.450619 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.450630 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.450639 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7kpj\" (UniqueName: \"kubernetes.io/projected/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d-kube-api-access-s7kpj\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.479248 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f133170-9779-4a12-86d0-43c6e9c16da8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f133170-9779-4a12-86d0-43c6e9c16da8" (UID: "7f133170-9779-4a12-86d0-43c6e9c16da8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.551741 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f133170-9779-4a12-86d0-43c6e9c16da8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.656722 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.657381 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.657573 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.657995 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.658558 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.658967 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.753798 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnld2\" (UniqueName: \"kubernetes.io/projected/a12a877a-9029-4eed-919f-6b21efa268ab-kube-api-access-lnld2\") pod \"a12a877a-9029-4eed-919f-6b21efa268ab\" (UID: \"a12a877a-9029-4eed-919f-6b21efa268ab\") " Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.753885 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12a877a-9029-4eed-919f-6b21efa268ab-utilities\") pod \"a12a877a-9029-4eed-919f-6b21efa268ab\" (UID: \"a12a877a-9029-4eed-919f-6b21efa268ab\") " Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.753918 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12a877a-9029-4eed-919f-6b21efa268ab-catalog-content\") pod \"a12a877a-9029-4eed-919f-6b21efa268ab\" (UID: \"a12a877a-9029-4eed-919f-6b21efa268ab\") " Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.754632 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a12a877a-9029-4eed-919f-6b21efa268ab-utilities" (OuterVolumeSpecName: "utilities") pod "a12a877a-9029-4eed-919f-6b21efa268ab" (UID: "a12a877a-9029-4eed-919f-6b21efa268ab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.758727 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12a877a-9029-4eed-919f-6b21efa268ab-kube-api-access-lnld2" (OuterVolumeSpecName: "kube-api-access-lnld2") pod "a12a877a-9029-4eed-919f-6b21efa268ab" (UID: "a12a877a-9029-4eed-919f-6b21efa268ab"). InnerVolumeSpecName "kube-api-access-lnld2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.772663 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a12a877a-9029-4eed-919f-6b21efa268ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a12a877a-9029-4eed-919f-6b21efa268ab" (UID: "a12a877a-9029-4eed-919f-6b21efa268ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.855964 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnld2\" (UniqueName: \"kubernetes.io/projected/a12a877a-9029-4eed-919f-6b21efa268ab-kube-api-access-lnld2\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.856017 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a12a877a-9029-4eed-919f-6b21efa268ab-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:49 crc kubenswrapper[4811]: I1203 00:09:49.856031 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a12a877a-9029-4eed-919f-6b21efa268ab-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.038956 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.042642 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5kqq" event={"ID":"a12a877a-9029-4eed-919f-6b21efa268ab","Type":"ContainerDied","Data":"1ebaa3aca8dff2fea5fd731408075036a7a6483fb05f730d84925f79dce510f8"} Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.042663 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5kqq" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.042703 4811 scope.go:117] "RemoveContainer" containerID="edab6a8d66b8d7e0a5487f294ccb839ed975c7f0db4e04fb4d445f2b1536f1d9" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.043700 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.044017 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.044612 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.044947 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6cr44" event={"ID":"7f133170-9779-4a12-86d0-43c6e9c16da8","Type":"ContainerDied","Data":"341fbd99ae456a0c26791535c8ff1d2ab002bdcdb19ba6df487e897ad591e2e2"} Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.044979 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6cr44" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.045059 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.045855 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.046689 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.047057 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.047584 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.048210 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.048454 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.049475 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dpqh" event={"ID":"08cf8c1a-2191-4e7c-bba4-2ecc51132d8d","Type":"ContainerDied","Data":"7f0eeab000c62a3af8fa84bfa36a22b67ed5d0cc5aaf50c504d8d5d0c92ccd2e"} Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.049527 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6dpqh" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.050512 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.051028 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.051340 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.051648 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.051956 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.072514 4811 scope.go:117] "RemoveContainer" containerID="5fb00dd11182a7202d648861d20310bc990e905fe47ec2645018bbd89e655e84" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.074001 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.074528 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.076411 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.076955 4811 status_manager.go:851] "Failed to get status for pod" 
podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.077598 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.079183 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.079553 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.092369 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.092909 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.096411 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.096939 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.097410 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.098400 4811 status_manager.go:851] 
"Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.098743 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.099757 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.106429 4811 scope.go:117] "RemoveContainer" containerID="31a02a8322ae9e3965aa7ddf8d102ad46901ccb680ac1487d9504e963d2a833a" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.120380 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.120773 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.124829 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.124863 4811 scope.go:117] "RemoveContainer" containerID="fb54e5e39bbc837997761d3330210eabd0e0c60f3160a5f9c3ae6418017524c2" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.125234 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.125494 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.146363 4811 scope.go:117] 
"RemoveContainer" containerID="1fb3d8e556512b4d1d00ffd03bd37a5ec4925d0343fb1959af47520bcd25d5c4" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.174366 4811 scope.go:117] "RemoveContainer" containerID="57018271f291c13fe5814d677ecf11674ef4132337da1e9b629f3b039fc308d8" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.202703 4811 scope.go:117] "RemoveContainer" containerID="eb89efa2882e5fb5a36e9b02af5ecf84ffeeea9a3096711ed6e90dc11dd5d0ad" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.219891 4811 scope.go:117] "RemoveContainer" containerID="9030ca4144e100662d8ed57ff6a7606f13b6f72c4c59fe23e3cecb86a72d7b3e" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.240997 4811 scope.go:117] "RemoveContainer" containerID="78fd001f4096de433871e6b3450de1eb219e9b3c4f60247b222f7a056eed499e" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.253505 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.254121 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.254448 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.254764 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.254984 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.255219 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.362004 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d224a153-46f1-4689-96f6-98d13fc54aea-kubelet-dir\") pod \"d224a153-46f1-4689-96f6-98d13fc54aea\" (UID: \"d224a153-46f1-4689-96f6-98d13fc54aea\") " Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.362118 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/d224a153-46f1-4689-96f6-98d13fc54aea-var-lock\") pod \"d224a153-46f1-4689-96f6-98d13fc54aea\" (UID: \"d224a153-46f1-4689-96f6-98d13fc54aea\") " Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.362198 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d224a153-46f1-4689-96f6-98d13fc54aea-kube-api-access\") pod \"d224a153-46f1-4689-96f6-98d13fc54aea\" (UID: \"d224a153-46f1-4689-96f6-98d13fc54aea\") " Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.363235 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d224a153-46f1-4689-96f6-98d13fc54aea-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d224a153-46f1-4689-96f6-98d13fc54aea" (UID: "d224a153-46f1-4689-96f6-98d13fc54aea"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.363300 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d224a153-46f1-4689-96f6-98d13fc54aea-var-lock" (OuterVolumeSpecName: "var-lock") pod "d224a153-46f1-4689-96f6-98d13fc54aea" (UID: "d224a153-46f1-4689-96f6-98d13fc54aea"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.367130 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d224a153-46f1-4689-96f6-98d13fc54aea-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d224a153-46f1-4689-96f6-98d13fc54aea" (UID: "d224a153-46f1-4689-96f6-98d13fc54aea"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.463680 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d224a153-46f1-4689-96f6-98d13fc54aea-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.463716 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d224a153-46f1-4689-96f6-98d13fc54aea-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:50 crc kubenswrapper[4811]: I1203 00:09:50.463728 4811 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d224a153-46f1-4689-96f6-98d13fc54aea-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.069491 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d224a153-46f1-4689-96f6-98d13fc54aea","Type":"ContainerDied","Data":"d2175f0b59676e9ca5592775a2070a4a0fb3aacca53e9999cd34ac4030ce26fc"} Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.070513 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2175f0b59676e9ca5592775a2070a4a0fb3aacca53e9999cd34ac4030ce26fc" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.071165 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.086737 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.087609 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.088292 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.088708 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.089077 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.583598 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.584348 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.584816 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.585097 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.585312 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.585507 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.585650 4811 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.585831 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.686722 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.686800 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.686852 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.687231 4811 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.687281 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.687302 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.788190 4811 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.788574 4811 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:51 crc kubenswrapper[4811]: I1203 00:09:51.788585 4811 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.081786 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.082743 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710" exitCode=0 Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.082841 4811 scope.go:117] "RemoveContainer" containerID="cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.082926 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.105542 4811 scope.go:117] "RemoveContainer" containerID="78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.109104 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.109454 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.109818 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.110150 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.110444 4811 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.110696 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.121221 4811 scope.go:117] "RemoveContainer" containerID="2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.127618 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.139290 4811 scope.go:117] "RemoveContainer" containerID="29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.155945 4811 scope.go:117] "RemoveContainer" containerID="83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.183601 4811 scope.go:117] "RemoveContainer" 
containerID="a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.203961 4811 scope.go:117] "RemoveContainer" containerID="cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05" Dec 03 00:09:52 crc kubenswrapper[4811]: E1203 00:09:52.204444 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\": container with ID starting with cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05 not found: ID does not exist" containerID="cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.204611 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05"} err="failed to get container status \"cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\": rpc error: code = NotFound desc = could not find container \"cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05\": container with ID starting with cc016b8c7072f81cf0e694dea2353e9c01b056e27b5e66cd88efbbd674011c05 not found: ID does not exist" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.204642 4811 scope.go:117] "RemoveContainer" containerID="78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9" Dec 03 00:09:52 crc kubenswrapper[4811]: E1203 00:09:52.204914 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\": container with ID starting with 78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9 not found: ID does not exist" containerID="78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.204966 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9"} err="failed to get container status \"78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\": rpc error: code = NotFound desc = could not find container \"78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9\": container with ID starting with 78beedcf33b04db071551c24046f2614a8f417b0def93a4525ee538cc9d219a9 not found: ID does not exist" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.205001 4811 scope.go:117] "RemoveContainer" containerID="2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d" Dec 03 00:09:52 crc kubenswrapper[4811]: E1203 00:09:52.206529 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\": container with ID starting with 2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d not found: ID does not exist" containerID="2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.206581 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d"} err="failed to get container status \"2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\": rpc error: code = 
NotFound desc = could not find container \"2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d\": container with ID starting with 2f2e0bc23eb3d6c9e311fdcc490590fe885517816cdfd69a513f785800889a4d not found: ID does not exist" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.206613 4811 scope.go:117] "RemoveContainer" containerID="29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3" Dec 03 00:09:52 crc kubenswrapper[4811]: E1203 00:09:52.207206 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\": container with ID starting with 29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3 not found: ID does not exist" containerID="29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.207242 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3"} err="failed to get container status \"29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\": rpc error: code = NotFound desc = could not find container \"29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3\": container with ID starting with 29f7a4877a5e502089d0fbb4db1e697fec66283ca39e8530e4d34d4808e540c3 not found: ID does not exist" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.207279 4811 scope.go:117] "RemoveContainer" containerID="83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710" Dec 03 00:09:52 crc kubenswrapper[4811]: E1203 00:09:52.209191 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\": container with ID starting with 83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710 not found: ID does not exist" containerID="83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.209228 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710"} err="failed to get container status \"83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\": rpc error: code = NotFound desc = could not find container \"83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710\": container with ID starting with 83e2f12e9179e5f1bcb52e7fb553bc966f0d4dac17a6e55f61f59bcb036ab710 not found: ID does not exist" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.209249 4811 scope.go:117] "RemoveContainer" containerID="a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390" Dec 03 00:09:52 crc kubenswrapper[4811]: E1203 00:09:52.209821 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\": container with ID starting with a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390 not found: ID does not exist" containerID="a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390" Dec 03 00:09:52 crc kubenswrapper[4811]: I1203 00:09:52.209859 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390"} err="failed to get container status \"a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\": rpc error: code = NotFound desc = could not find container \"a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390\": container with ID starting with a4602a3103ac5b654b617e3d7a5d80ae7ce183b455c1c2bd98c1fd5365917390 not found: ID does not exist" Dec 03 00:09:54 crc kubenswrapper[4811]: E1203 00:09:54.353843 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.129:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d8c0693ec41d6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:09:48.878528982 +0000 UTC m=+229.020358444,LastTimestamp:2025-12-03 00:09:48.878528982 +0000 UTC m=+229.020358444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:09:55 crc kubenswrapper[4811]: E1203 00:09:55.594987 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:55 crc kubenswrapper[4811]: E1203 00:09:55.595724 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:55 crc kubenswrapper[4811]: E1203 00:09:55.596179 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:55 crc kubenswrapper[4811]: E1203 00:09:55.597420 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:55 crc kubenswrapper[4811]: E1203 00:09:55.598244 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:55 crc kubenswrapper[4811]: I1203 00:09:55.598328 4811 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 00:09:55 crc kubenswrapper[4811]: E1203 00:09:55.598796 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="200ms" Dec 03 00:09:55 crc 
kubenswrapper[4811]: E1203 00:09:55.800134 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="400ms" Dec 03 00:09:56 crc kubenswrapper[4811]: E1203 00:09:56.200801 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="800ms" Dec 03 00:09:57 crc kubenswrapper[4811]: E1203 00:09:57.001868 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="1.6s" Dec 03 00:09:58 crc kubenswrapper[4811]: E1203 00:09:58.603247 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.129:6443: connect: connection refused" interval="3.2s" Dec 03 00:09:59 crc kubenswrapper[4811]: I1203 00:09:59.114226 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:59 crc kubenswrapper[4811]: I1203 00:09:59.115552 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:59 crc kubenswrapper[4811]: I1203 00:09:59.116154 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:59 crc kubenswrapper[4811]: I1203 00:09:59.116726 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:59 crc kubenswrapper[4811]: I1203 00:09:59.117353 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:59 crc kubenswrapper[4811]: I1203 00:09:59.117749 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:09:59 crc kubenswrapper[4811]: I1203 00:09:59.128477 4811 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7278dba7-5e62-413c-b7b9-3d5133ebc173" Dec 03 00:09:59 crc kubenswrapper[4811]: I1203 00:09:59.128514 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7278dba7-5e62-413c-b7b9-3d5133ebc173" Dec 03 00:09:59 crc kubenswrapper[4811]: E1203 00:09:59.128885 4811 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:59 crc kubenswrapper[4811]: I1203 00:09:59.129398 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:09:59 crc kubenswrapper[4811]: W1203 00:09:59.152211 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-3fa6652ef132bce92ad223711eb911fccb2bee96aec9b09ba822dbe631a6fc2c WatchSource:0}: Error finding container 3fa6652ef132bce92ad223711eb911fccb2bee96aec9b09ba822dbe631a6fc2c: Status 404 returned error can't find the container with id 3fa6652ef132bce92ad223711eb911fccb2bee96aec9b09ba822dbe631a6fc2c Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.122303 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.122939 4811 status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.123127 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.123486 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.123723 4811 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.124013 4811 status_manager.go:851] "Failed to get status for pod" 
podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.149514 4811 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="659368a884a4ea17443480205fd1b31a271b6b47bcf45fe0f1bb47d7083d0ddb" exitCode=0 Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.149557 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"659368a884a4ea17443480205fd1b31a271b6b47bcf45fe0f1bb47d7083d0ddb"} Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.149586 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3fa6652ef132bce92ad223711eb911fccb2bee96aec9b09ba822dbe631a6fc2c"} Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.149849 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7278dba7-5e62-413c-b7b9-3d5133ebc173" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.149861 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7278dba7-5e62-413c-b7b9-3d5133ebc173" Dec 03 00:10:00 crc kubenswrapper[4811]: E1203 00:10:00.150187 4811 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.150442 4811 status_manager.go:851] "Failed to get status for pod" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.150821 4811 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.151355 4811 status_manager.go:851] "Failed to get status for pod" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" pod="openshift-marketplace/redhat-operators-6cr44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6cr44\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.151707 4811 status_manager.go:851] "Failed to get status for pod" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" pod="openshift-marketplace/redhat-marketplace-r5kqq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r5kqq\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.152193 4811 
status_manager.go:851] "Failed to get status for pod" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" pod="openshift-marketplace/community-operators-6dpqh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6dpqh\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:10:00 crc kubenswrapper[4811]: I1203 00:10:00.152515 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.129:6443: connect: connection refused" Dec 03 00:10:01 crc kubenswrapper[4811]: I1203 00:10:01.171238 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d948f3af6df0d6b248d774e4745bc5c521468e0353da63585958b6aff19dbe7f"} Dec 03 00:10:01 crc kubenswrapper[4811]: I1203 00:10:01.171548 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"27bdcaff3bf485380ad942d29e7905261c8d51a49bac35e747463f5b4d15bd54"} Dec 03 00:10:01 crc kubenswrapper[4811]: I1203 00:10:01.171558 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5d18a8937e7e523df16b9c60d10ea4375ea99ac3d89c5d05e7cafe04b7791ae3"} Dec 03 00:10:01 crc kubenswrapper[4811]: I1203 00:10:01.171566 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b6a57d31640e0bac5dd92808fb87e95c070caf736fda735a0a36cce78abff64c"} Dec 03 00:10:02 crc kubenswrapper[4811]: I1203 00:10:02.181860 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5f69739a047f547db937dd580558c4b1b47ccd6c64ccf010086de59ee487320d"} Dec 03 00:10:02 crc kubenswrapper[4811]: I1203 00:10:02.182054 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:02 crc kubenswrapper[4811]: I1203 00:10:02.182281 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7278dba7-5e62-413c-b7b9-3d5133ebc173" Dec 03 00:10:02 crc kubenswrapper[4811]: I1203 00:10:02.182310 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7278dba7-5e62-413c-b7b9-3d5133ebc173" Dec 03 00:10:04 crc kubenswrapper[4811]: I1203 00:10:04.212748 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:04 crc kubenswrapper[4811]: I1203 00:10:04.213223 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:04 crc kubenswrapper[4811]: I1203 00:10:04.220208 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:06 crc kubenswrapper[4811]: I1203 00:10:06.233106 4811 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 00:10:06 crc kubenswrapper[4811]: I1203 00:10:06.233675 4811 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50" exitCode=1 Dec 03 00:10:06 crc kubenswrapper[4811]: I1203 00:10:06.233712 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50"} Dec 03 00:10:06 crc kubenswrapper[4811]: I1203 00:10:06.234225 4811 scope.go:117] "RemoveContainer" containerID="a5b7c735a38b0c835c6e4ebc334275387dd201b710d1dc16552cdfa674eb5f50" Dec 03 00:10:06 crc kubenswrapper[4811]: I1203 00:10:06.668524 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:10:07 crc kubenswrapper[4811]: I1203 00:10:07.190838 4811 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:07 crc kubenswrapper[4811]: I1203 00:10:07.244467 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 00:10:07 crc kubenswrapper[4811]: I1203 00:10:07.244563 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c82982ad3b0929218e84163391110f47f6cf6fa6aca07a8ec65e482f5dbfc700"} Dec 03 00:10:07 crc kubenswrapper[4811]: I1203 00:10:07.244862 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7278dba7-5e62-413c-b7b9-3d5133ebc173" Dec 03 00:10:07 crc kubenswrapper[4811]: I1203 00:10:07.244879 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7278dba7-5e62-413c-b7b9-3d5133ebc173" Dec 03 00:10:07 crc kubenswrapper[4811]: I1203 00:10:07.248681 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:07 crc kubenswrapper[4811]: I1203 00:10:07.328559 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:10:07 crc kubenswrapper[4811]: I1203 00:10:07.333229 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:10:07 crc kubenswrapper[4811]: I1203 00:10:07.334882 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="3d3bff30-5fae-435f-b9fb-a25965017c30" Dec 03 00:10:07 crc kubenswrapper[4811]: I1203 00:10:07.968107 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:10:08 crc kubenswrapper[4811]: I1203 00:10:08.251766 4811 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7278dba7-5e62-413c-b7b9-3d5133ebc173" Dec 03 00:10:08 crc kubenswrapper[4811]: I1203 00:10:08.251831 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7278dba7-5e62-413c-b7b9-3d5133ebc173" Dec 03 00:10:08 crc kubenswrapper[4811]: I1203 00:10:08.255832 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="3d3bff30-5fae-435f-b9fb-a25965017c30" Dec 03 00:10:13 crc kubenswrapper[4811]: I1203 00:10:13.535083 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 00:10:13 crc kubenswrapper[4811]: I1203 00:10:13.791180 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 00:10:14 crc kubenswrapper[4811]: I1203 00:10:14.183183 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 00:10:14 crc kubenswrapper[4811]: I1203 00:10:14.491841 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 03 00:10:15 crc kubenswrapper[4811]: I1203 00:10:15.533938 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 00:10:15 crc kubenswrapper[4811]: I1203 00:10:15.547179 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 00:10:16 crc kubenswrapper[4811]: I1203 00:10:16.529081 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 00:10:16 crc kubenswrapper[4811]: I1203 00:10:16.887196 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 00:10:17 crc kubenswrapper[4811]: I1203 00:10:17.447426 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 00:10:17 crc kubenswrapper[4811]: I1203 00:10:17.635554 4811 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 00:10:17 crc kubenswrapper[4811]: I1203 00:10:17.806080 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 00:10:17 crc kubenswrapper[4811]: I1203 00:10:17.882237 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 00:10:17 crc kubenswrapper[4811]: I1203 00:10:17.973010 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:10:18 crc kubenswrapper[4811]: I1203 00:10:18.234367 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 00:10:18 crc kubenswrapper[4811]: I1203 00:10:18.550652 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 00:10:18 crc kubenswrapper[4811]: I1203 00:10:18.561096 4811 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 00:10:18 crc kubenswrapper[4811]: I1203 00:10:18.581735 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 03 00:10:18 crc kubenswrapper[4811]: I1203 00:10:18.746336 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 00:10:18 crc kubenswrapper[4811]: I1203 00:10:18.787546 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 00:10:18 crc kubenswrapper[4811]: I1203 00:10:18.838765 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 00:10:18 crc kubenswrapper[4811]: I1203 00:10:18.918941 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 00:10:18 crc kubenswrapper[4811]: I1203 00:10:18.962402 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 00:10:19 crc kubenswrapper[4811]: I1203 00:10:19.137933 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 00:10:19 crc kubenswrapper[4811]: I1203 00:10:19.217348 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 00:10:19 crc kubenswrapper[4811]: I1203 00:10:19.221168 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 00:10:19 crc kubenswrapper[4811]: I1203 00:10:19.410609 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 00:10:19 crc kubenswrapper[4811]: I1203 00:10:19.449414 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 00:10:19 crc kubenswrapper[4811]: I1203 00:10:19.652873 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 00:10:19 crc kubenswrapper[4811]: I1203 00:10:19.748040 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 00:10:19 crc kubenswrapper[4811]: I1203 00:10:19.755816 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 00:10:19 crc kubenswrapper[4811]: I1203 00:10:19.786875 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 00:10:20 crc kubenswrapper[4811]: I1203 00:10:20.586872 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:10:20 crc kubenswrapper[4811]: I1203 00:10:20.670741 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 00:10:20 crc kubenswrapper[4811]: I1203 00:10:20.832019 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 00:10:20 crc kubenswrapper[4811]: I1203 00:10:20.876459 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Dec 03 00:10:20 crc kubenswrapper[4811]: I1203 00:10:20.926116 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 00:10:21 crc kubenswrapper[4811]: I1203 00:10:21.028952 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 00:10:21 crc kubenswrapper[4811]: I1203 00:10:21.041682 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 00:10:21 crc kubenswrapper[4811]: I1203 00:10:21.071115 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 00:10:21 crc kubenswrapper[4811]: I1203 00:10:21.082987 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 00:10:21 crc kubenswrapper[4811]: I1203 00:10:21.128840 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 03 00:10:21 crc kubenswrapper[4811]: I1203 00:10:21.706037 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 00:10:21 crc kubenswrapper[4811]: I1203 00:10:21.825516 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 00:10:21 crc kubenswrapper[4811]: I1203 00:10:21.841745 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 00:10:21 crc kubenswrapper[4811]: I1203 00:10:21.886644 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 00:10:21 crc kubenswrapper[4811]: I1203 00:10:21.963151 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.103235 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.163328 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.237124 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.299426 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 03 00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.340771 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.363021 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.392907 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.406031 4811 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 
00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.436377 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 03 00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.689302 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.691623 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.765401 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 00:10:22 crc kubenswrapper[4811]: I1203 00:10:22.951337 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.153374 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.197796 4811 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.200354 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=37.200338745 podStartE2EDuration="37.200338745s" podCreationTimestamp="2025-12-03 00:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:10:07.300048241 +0000 UTC m=+247.441877723" watchObservedRunningTime="2025-12-03 00:10:23.200338745 +0000 UTC m=+263.342168217" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.202170 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6dpqh","openshift-marketplace/redhat-operators-6cr44","openshift-marketplace/redhat-marketplace-r5kqq","openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.202228 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.208904 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.225880 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.225859569 podStartE2EDuration="16.225859569s" podCreationTimestamp="2025-12-03 00:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:10:23.220573899 +0000 UTC m=+263.362403391" watchObservedRunningTime="2025-12-03 00:10:23.225859569 +0000 UTC m=+263.367689051" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.233067 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.275658 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.305050 4811 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.357540 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.385153 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.492102 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.499452 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.509219 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.591039 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.687030 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.690054 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.701489 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.874928 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.899691 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 00:10:23 crc kubenswrapper[4811]: I1203 00:10:23.906526 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.034910 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.123127 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" path="/var/lib/kubelet/pods/08cf8c1a-2191-4e7c-bba4-2ecc51132d8d/volumes" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.123831 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" path="/var/lib/kubelet/pods/7f133170-9779-4a12-86d0-43c6e9c16da8/volumes" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.124502 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" path="/var/lib/kubelet/pods/a12a877a-9029-4eed-919f-6b21efa268ab/volumes" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.180432 4811 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.206091 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.244801 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.328039 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.418189 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.466290 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.712568 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.752951 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.766285 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.788147 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.813346 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.836624 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.895506 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.898346 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.913685 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 00:10:24 crc kubenswrapper[4811]: I1203 00:10:24.928908 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.005145 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.156874 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.211203 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.225026 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" 
Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.259387 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.346733 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.371664 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.471098 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.509289 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.599273 4811 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.781535 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.796590 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.796654 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.804069 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.839789 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.846717 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.907298 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.967034 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.969181 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 00:10:25 crc kubenswrapper[4811]: I1203 00:10:25.989485 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 00:10:26 crc kubenswrapper[4811]: I1203 00:10:26.012913 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 00:10:26 crc kubenswrapper[4811]: I1203 00:10:26.028722 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 00:10:26 crc kubenswrapper[4811]: I1203 00:10:26.061405 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 00:10:26 crc kubenswrapper[4811]: I1203 00:10:26.133803 4811 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 03 00:10:26 crc kubenswrapper[4811]: I1203 00:10:26.233315 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 00:10:26 crc kubenswrapper[4811]: I1203 00:10:26.238585 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 00:10:26 crc kubenswrapper[4811]: I1203 00:10:26.285581 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 00:10:26 crc kubenswrapper[4811]: I1203 00:10:26.615884 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 00:10:26 crc kubenswrapper[4811]: I1203 00:10:26.772900 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 00:10:26 crc kubenswrapper[4811]: I1203 00:10:26.838581 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 00:10:26 crc kubenswrapper[4811]: I1203 00:10:26.855861 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 03 00:10:27 crc kubenswrapper[4811]: I1203 00:10:27.048373 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 00:10:27 crc kubenswrapper[4811]: I1203 00:10:27.072633 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 00:10:27 crc kubenswrapper[4811]: I1203 00:10:27.303072 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 00:10:27 crc kubenswrapper[4811]: I1203 00:10:27.315935 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 00:10:27 crc kubenswrapper[4811]: I1203 00:10:27.454120 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 00:10:27 crc kubenswrapper[4811]: I1203 00:10:27.535766 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 00:10:27 crc kubenswrapper[4811]: I1203 00:10:27.608463 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 00:10:27 crc kubenswrapper[4811]: I1203 00:10:27.619687 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 00:10:27 crc kubenswrapper[4811]: I1203 00:10:27.786406 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 00:10:27 crc kubenswrapper[4811]: I1203 00:10:27.796904 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 00:10:27 crc kubenswrapper[4811]: I1203 00:10:27.999583 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.026290 4811 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.151241 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.161187 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.167932 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.242298 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.256935 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.257524 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.313364 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.332152 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.337155 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.383163 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.403846 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.415818 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.468095 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.480172 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.485994 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.488280 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.565499 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.632832 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.650501 4811 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.691490 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.830754 4811 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.852099 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 00:10:28 crc kubenswrapper[4811]: I1203 00:10:28.936030 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.010169 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.019815 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.055748 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.060783 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.303173 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.330175 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.402719 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.403494 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.490376 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.506529 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.554503 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.567232 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.657543 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.671490 4811 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.778298 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.825996 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.852778 4811 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 00:10:29 crc kubenswrapper[4811]: I1203 00:10:29.853161 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://659556b64b8b826de91e7be016f2a0d20c223effde364e6dfe708c5d902f5243" gracePeriod=5 Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.098910 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.119630 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.211360 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.339863 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.463646 4811 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.490732 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.508870 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.580397 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.581991 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.637187 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.660530 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.689701 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.742253 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.784896 4811 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.833595 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.872942 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.885800 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 00:10:30 crc kubenswrapper[4811]: I1203 00:10:30.972722 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.006681 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.028138 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.144672 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.245688 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.301472 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.485063 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.545759 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.563072 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.771855 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.781576 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.787599 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.816124 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.914089 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.924425 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 03 00:10:31 crc kubenswrapper[4811]: I1203 00:10:31.946536 4811 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.080533 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.080738 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.154301 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.210724 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.324920 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.437468 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.452368 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.499650 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.510841 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.544502 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.652181 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.849691 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.950807 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 03 00:10:32 crc kubenswrapper[4811]: I1203 00:10:32.957183 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 00:10:33 crc kubenswrapper[4811]: I1203 00:10:33.047171 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 00:10:33 crc kubenswrapper[4811]: I1203 00:10:33.070587 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 00:10:33 crc kubenswrapper[4811]: I1203 00:10:33.382757 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 00:10:33 crc kubenswrapper[4811]: I1203 00:10:33.397713 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 00:10:33 crc kubenswrapper[4811]: I1203 
00:10:33.414957 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 00:10:33 crc kubenswrapper[4811]: I1203 00:10:33.429662 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 00:10:33 crc kubenswrapper[4811]: I1203 00:10:33.647186 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 00:10:33 crc kubenswrapper[4811]: I1203 00:10:33.662770 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 00:10:33 crc kubenswrapper[4811]: I1203 00:10:33.696146 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 00:10:33 crc kubenswrapper[4811]: I1203 00:10:33.722630 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 00:10:33 crc kubenswrapper[4811]: I1203 00:10:33.946529 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 00:10:33 crc kubenswrapper[4811]: I1203 00:10:33.952444 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 00:10:34 crc kubenswrapper[4811]: I1203 00:10:34.179500 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 00:10:34 crc kubenswrapper[4811]: I1203 00:10:34.335463 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 00:10:34 crc kubenswrapper[4811]: I1203 00:10:34.349235 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 00:10:34 crc kubenswrapper[4811]: I1203 00:10:34.389067 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 00:10:34 crc kubenswrapper[4811]: I1203 00:10:34.441788 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 00:10:34 crc kubenswrapper[4811]: I1203 00:10:34.496340 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 00:10:34 crc kubenswrapper[4811]: I1203 00:10:34.547909 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 00:10:34 crc kubenswrapper[4811]: I1203 00:10:34.668627 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.415701 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.424549 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.428366 4811 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.428430 4811 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="659556b64b8b826de91e7be016f2a0d20c223effde364e6dfe708c5d902f5243" exitCode=137 Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.559867 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.559978 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.606627 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.625014 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.743945 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.744012 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.744078 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.744109 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.744192 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.744232 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.744563 4811 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.744651 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.744693 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.744719 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.753865 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.846170 4811 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.846579 4811 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.846649 4811 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:35 crc kubenswrapper[4811]: I1203 00:10:35.846721 4811 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.122909 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.123896 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.137806 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.137875 4811 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c69b8e84-34db-444a-bccd-8bdf8c7436e3" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.144619 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.144670 4811 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c69b8e84-34db-444a-bccd-8bdf8c7436e3" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.435146 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.435244 4811 scope.go:117] "RemoveContainer" containerID="659556b64b8b826de91e7be016f2a0d20c223effde364e6dfe708c5d902f5243" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.435321 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.437573 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.555248 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.556150 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.597796 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.650676 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 00:10:36 crc kubenswrapper[4811]: I1203 00:10:36.666821 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 00:10:37 crc kubenswrapper[4811]: I1203 00:10:37.041453 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.306502 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfsfv"] Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.307393 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sfsfv" podUID="e02c207e-d2f6-4c42-8e80-8967413395c0" containerName="registry-server" containerID="cri-o://1e19cdf1de3461f8084eb4a7eff8624c5d40ecf2b256cb99d0aa0df212bdeb04" gracePeriod=30 Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.321826 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ms8z7"] Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.322230 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ms8z7" podUID="43747cdd-50ef-43df-b98d-a4d855984bb3" containerName="registry-server" containerID="cri-o://b7fe22ee668a14a52302ca06ae4348d8ce6079f5efc96d1abc135ea03322d3cc" gracePeriod=30 Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.339845 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-956mn"] Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.340970 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" podUID="48ef7152-2ec1-4cfa-b2ab-88ff2fb42401" containerName="marketplace-operator" containerID="cri-o://a1d2bc9c58af5f510b3f2b3ba17cc69195216f7ea8b34e87518c5221514d44dd" gracePeriod=30 Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.348083 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj4zn"] Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.348411 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xj4zn" podUID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" containerName="registry-server" 
containerID="cri-o://b5d327540158dfbbe80be66ef25b17a96b63276bb2b7f122e48e283d43d60c07" gracePeriod=30 Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.361504 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bqlcb"] Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.361953 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bqlcb" podUID="41ca1166-555e-4be2-b998-59bad45528df" containerName="registry-server" containerID="cri-o://aa7f4d80e744bc98654863977f7c091a969f9f2ac95f63d12f0580072e19ce9d" gracePeriod=30 Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.379196 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tx7sj"] Dec 03 00:10:45 crc kubenswrapper[4811]: E1203 00:10:45.379813 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" containerName="extract-utilities" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.379836 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" containerName="extract-utilities" Dec 03 00:10:45 crc kubenswrapper[4811]: E1203 00:10:45.379853 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.379860 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 00:10:45 crc kubenswrapper[4811]: E1203 00:10:45.379872 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" containerName="extract-content" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.379878 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" containerName="extract-content" Dec 03 00:10:45 crc kubenswrapper[4811]: E1203 00:10:45.379885 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" containerName="registry-server" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.379892 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" containerName="registry-server" Dec 03 00:10:45 crc kubenswrapper[4811]: E1203 00:10:45.379900 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" containerName="extract-content" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.379908 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" containerName="extract-content" Dec 03 00:10:45 crc kubenswrapper[4811]: E1203 00:10:45.379919 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" containerName="installer" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.379925 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" containerName="installer" Dec 03 00:10:45 crc kubenswrapper[4811]: E1203 00:10:45.379933 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" containerName="registry-server" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.379939 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" 
containerName="registry-server" Dec 03 00:10:45 crc kubenswrapper[4811]: E1203 00:10:45.379950 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" containerName="registry-server" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.379956 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" containerName="registry-server" Dec 03 00:10:45 crc kubenswrapper[4811]: E1203 00:10:45.379966 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" containerName="extract-content" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.379972 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" containerName="extract-content" Dec 03 00:10:45 crc kubenswrapper[4811]: E1203 00:10:45.379982 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" containerName="extract-utilities" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.379988 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" containerName="extract-utilities" Dec 03 00:10:45 crc kubenswrapper[4811]: E1203 00:10:45.379996 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" containerName="extract-utilities" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.380003 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" containerName="extract-utilities" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.380127 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f133170-9779-4a12-86d0-43c6e9c16da8" containerName="registry-server" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.380149 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cf8c1a-2191-4e7c-bba4-2ecc51132d8d" containerName="registry-server" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.380157 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12a877a-9029-4eed-919f-6b21efa268ab" containerName="registry-server" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.380166 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.380179 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d224a153-46f1-4689-96f6-98d13fc54aea" containerName="installer" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.381171 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.388755 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tx7sj"] Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.485947 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cpg8\" (UniqueName: \"kubernetes.io/projected/0f19baaf-b832-4364-89bc-99ad74e0aae1-kube-api-access-9cpg8\") pod \"marketplace-operator-79b997595-tx7sj\" (UID: \"0f19baaf-b832-4364-89bc-99ad74e0aae1\") " pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.486024 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f19baaf-b832-4364-89bc-99ad74e0aae1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tx7sj\" (UID: \"0f19baaf-b832-4364-89bc-99ad74e0aae1\") " pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.486059 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f19baaf-b832-4364-89bc-99ad74e0aae1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tx7sj\" (UID: \"0f19baaf-b832-4364-89bc-99ad74e0aae1\") " pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.587289 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cpg8\" (UniqueName: \"kubernetes.io/projected/0f19baaf-b832-4364-89bc-99ad74e0aae1-kube-api-access-9cpg8\") pod \"marketplace-operator-79b997595-tx7sj\" (UID: \"0f19baaf-b832-4364-89bc-99ad74e0aae1\") " pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.587349 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f19baaf-b832-4364-89bc-99ad74e0aae1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tx7sj\" (UID: \"0f19baaf-b832-4364-89bc-99ad74e0aae1\") " pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.587381 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f19baaf-b832-4364-89bc-99ad74e0aae1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tx7sj\" (UID: \"0f19baaf-b832-4364-89bc-99ad74e0aae1\") " pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.588879 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f19baaf-b832-4364-89bc-99ad74e0aae1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tx7sj\" (UID: \"0f19baaf-b832-4364-89bc-99ad74e0aae1\") " pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.596303 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/0f19baaf-b832-4364-89bc-99ad74e0aae1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tx7sj\" (UID: \"0f19baaf-b832-4364-89bc-99ad74e0aae1\") " pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.609911 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cpg8\" (UniqueName: \"kubernetes.io/projected/0f19baaf-b832-4364-89bc-99ad74e0aae1-kube-api-access-9cpg8\") pod \"marketplace-operator-79b997595-tx7sj\" (UID: \"0f19baaf-b832-4364-89bc-99ad74e0aae1\") " pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:45 crc kubenswrapper[4811]: I1203 00:10:45.789795 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:46 crc kubenswrapper[4811]: I1203 00:10:46.001815 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tx7sj"] Dec 03 00:10:46 crc kubenswrapper[4811]: I1203 00:10:46.509037 4811 generic.go:334] "Generic (PLEG): container finished" podID="e02c207e-d2f6-4c42-8e80-8967413395c0" containerID="1e19cdf1de3461f8084eb4a7eff8624c5d40ecf2b256cb99d0aa0df212bdeb04" exitCode=0 Dec 03 00:10:46 crc kubenswrapper[4811]: I1203 00:10:46.509072 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfsfv" event={"ID":"e02c207e-d2f6-4c42-8e80-8967413395c0","Type":"ContainerDied","Data":"1e19cdf1de3461f8084eb4a7eff8624c5d40ecf2b256cb99d0aa0df212bdeb04"} Dec 03 00:10:46 crc kubenswrapper[4811]: I1203 00:10:46.511598 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" event={"ID":"0f19baaf-b832-4364-89bc-99ad74e0aae1","Type":"ContainerStarted","Data":"fa068d13735ad82d5d5269cd53be97681307f336816ca77ab1e9bcd41387244e"} Dec 03 00:10:46 crc kubenswrapper[4811]: I1203 00:10:46.956097 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.006115 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqbvj\" (UniqueName: \"kubernetes.io/projected/e02c207e-d2f6-4c42-8e80-8967413395c0-kube-api-access-wqbvj\") pod \"e02c207e-d2f6-4c42-8e80-8967413395c0\" (UID: \"e02c207e-d2f6-4c42-8e80-8967413395c0\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.006343 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02c207e-d2f6-4c42-8e80-8967413395c0-utilities\") pod \"e02c207e-d2f6-4c42-8e80-8967413395c0\" (UID: \"e02c207e-d2f6-4c42-8e80-8967413395c0\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.006376 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02c207e-d2f6-4c42-8e80-8967413395c0-catalog-content\") pod \"e02c207e-d2f6-4c42-8e80-8967413395c0\" (UID: \"e02c207e-d2f6-4c42-8e80-8967413395c0\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.007252 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e02c207e-d2f6-4c42-8e80-8967413395c0-utilities" (OuterVolumeSpecName: "utilities") pod "e02c207e-d2f6-4c42-8e80-8967413395c0" (UID: "e02c207e-d2f6-4c42-8e80-8967413395c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.017826 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e02c207e-d2f6-4c42-8e80-8967413395c0-kube-api-access-wqbvj" (OuterVolumeSpecName: "kube-api-access-wqbvj") pod "e02c207e-d2f6-4c42-8e80-8967413395c0" (UID: "e02c207e-d2f6-4c42-8e80-8967413395c0"). InnerVolumeSpecName "kube-api-access-wqbvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.065880 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e02c207e-d2f6-4c42-8e80-8967413395c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e02c207e-d2f6-4c42-8e80-8967413395c0" (UID: "e02c207e-d2f6-4c42-8e80-8967413395c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.108165 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02c207e-d2f6-4c42-8e80-8967413395c0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.108231 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02c207e-d2f6-4c42-8e80-8967413395c0-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.108247 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqbvj\" (UniqueName: \"kubernetes.io/projected/e02c207e-d2f6-4c42-8e80-8967413395c0-kube-api-access-wqbvj\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.490745 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-956mn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.490851 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" podUID="48ef7152-2ec1-4cfa-b2ab-88ff2fb42401" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.524107 4811 generic.go:334] "Generic (PLEG): container finished" podID="43747cdd-50ef-43df-b98d-a4d855984bb3" containerID="b7fe22ee668a14a52302ca06ae4348d8ce6079f5efc96d1abc135ea03322d3cc" exitCode=0 Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.524216 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ms8z7" event={"ID":"43747cdd-50ef-43df-b98d-a4d855984bb3","Type":"ContainerDied","Data":"b7fe22ee668a14a52302ca06ae4348d8ce6079f5efc96d1abc135ea03322d3cc"} Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.531585 4811 generic.go:334] "Generic (PLEG): container finished" podID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" containerID="b5d327540158dfbbe80be66ef25b17a96b63276bb2b7f122e48e283d43d60c07" exitCode=0 Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.531681 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj4zn" event={"ID":"3f0c5586-e964-4734-a361-bcc6d34dfc8b","Type":"ContainerDied","Data":"b5d327540158dfbbe80be66ef25b17a96b63276bb2b7f122e48e283d43d60c07"} Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.550683 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" event={"ID":"0f19baaf-b832-4364-89bc-99ad74e0aae1","Type":"ContainerStarted","Data":"21ba016295ce70ff279aea465410793cafd2941eebd507aa6086ac871502a08d"} Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.551053 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.553552 4811 generic.go:334] "Generic (PLEG): container finished" podID="48ef7152-2ec1-4cfa-b2ab-88ff2fb42401" containerID="a1d2bc9c58af5f510b3f2b3ba17cc69195216f7ea8b34e87518c5221514d44dd" 
exitCode=0 Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.553658 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" event={"ID":"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401","Type":"ContainerDied","Data":"a1d2bc9c58af5f510b3f2b3ba17cc69195216f7ea8b34e87518c5221514d44dd"} Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.556139 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.557158 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfsfv" event={"ID":"e02c207e-d2f6-4c42-8e80-8967413395c0","Type":"ContainerDied","Data":"da5a9af32023b256e504d99d49b60651999f568367309954ead3611e89394bfd"} Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.557208 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfsfv" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.557274 4811 scope.go:117] "RemoveContainer" containerID="1e19cdf1de3461f8084eb4a7eff8624c5d40ecf2b256cb99d0aa0df212bdeb04" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.577479 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tx7sj" podStartSLOduration=2.57745812 podStartE2EDuration="2.57745812s" podCreationTimestamp="2025-12-03 00:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:10:47.577016198 +0000 UTC m=+287.718845670" watchObservedRunningTime="2025-12-03 00:10:47.57745812 +0000 UTC m=+287.719287592" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.585951 4811 generic.go:334] "Generic (PLEG): container finished" podID="41ca1166-555e-4be2-b998-59bad45528df" containerID="aa7f4d80e744bc98654863977f7c091a969f9f2ac95f63d12f0580072e19ce9d" exitCode=0 Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.586010 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqlcb" event={"ID":"41ca1166-555e-4be2-b998-59bad45528df","Type":"ContainerDied","Data":"aa7f4d80e744bc98654863977f7c091a969f9f2ac95f63d12f0580072e19ce9d"} Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.600133 4811 scope.go:117] "RemoveContainer" containerID="4499585db793f671a98c2a882fc7d685adb0b88f879d47ba810fda3b2dc23ad4" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.633357 4811 scope.go:117] "RemoveContainer" containerID="ccf7e8fbec21327a164a591e3948745bd9a1c63ebf6ab5bbb7669ee6d937152f" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.633701 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfsfv"] Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.651356 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sfsfv"] Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.769980 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.829227 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.832941 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.838922 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.848942 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ca1166-555e-4be2-b998-59bad45528df-utilities\") pod \"41ca1166-555e-4be2-b998-59bad45528df\" (UID: \"41ca1166-555e-4be2-b998-59bad45528df\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.849011 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnn7v\" (UniqueName: \"kubernetes.io/projected/41ca1166-555e-4be2-b998-59bad45528df-kube-api-access-hnn7v\") pod \"41ca1166-555e-4be2-b998-59bad45528df\" (UID: \"41ca1166-555e-4be2-b998-59bad45528df\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.849037 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ca1166-555e-4be2-b998-59bad45528df-catalog-content\") pod \"41ca1166-555e-4be2-b998-59bad45528df\" (UID: \"41ca1166-555e-4be2-b998-59bad45528df\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.850498 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41ca1166-555e-4be2-b998-59bad45528df-utilities" (OuterVolumeSpecName: "utilities") pod "41ca1166-555e-4be2-b998-59bad45528df" (UID: "41ca1166-555e-4be2-b998-59bad45528df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.854302 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ca1166-555e-4be2-b998-59bad45528df-kube-api-access-hnn7v" (OuterVolumeSpecName: "kube-api-access-hnn7v") pod "41ca1166-555e-4be2-b998-59bad45528df" (UID: "41ca1166-555e-4be2-b998-59bad45528df"). InnerVolumeSpecName "kube-api-access-hnn7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.950528 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmfbq\" (UniqueName: \"kubernetes.io/projected/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-kube-api-access-mmfbq\") pod \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\" (UID: \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.950630 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-marketplace-operator-metrics\") pod \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\" (UID: \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.950675 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43747cdd-50ef-43df-b98d-a4d855984bb3-utilities\") pod \"43747cdd-50ef-43df-b98d-a4d855984bb3\" (UID: \"43747cdd-50ef-43df-b98d-a4d855984bb3\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.950706 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-marketplace-trusted-ca\") pod \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\" (UID: \"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.950746 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0c5586-e964-4734-a361-bcc6d34dfc8b-catalog-content\") pod \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\" (UID: \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.950768 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgmsf\" (UniqueName: \"kubernetes.io/projected/3f0c5586-e964-4734-a361-bcc6d34dfc8b-kube-api-access-hgmsf\") pod \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\" (UID: \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.950863 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43747cdd-50ef-43df-b98d-a4d855984bb3-catalog-content\") pod \"43747cdd-50ef-43df-b98d-a4d855984bb3\" (UID: \"43747cdd-50ef-43df-b98d-a4d855984bb3\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.950916 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v24ff\" (UniqueName: \"kubernetes.io/projected/43747cdd-50ef-43df-b98d-a4d855984bb3-kube-api-access-v24ff\") pod \"43747cdd-50ef-43df-b98d-a4d855984bb3\" (UID: \"43747cdd-50ef-43df-b98d-a4d855984bb3\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.950963 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0c5586-e964-4734-a361-bcc6d34dfc8b-utilities\") pod \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\" (UID: \"3f0c5586-e964-4734-a361-bcc6d34dfc8b\") " Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.951221 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ca1166-555e-4be2-b998-59bad45528df-utilities\") on node \"crc\" 
DevicePath \"\"" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.951235 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnn7v\" (UniqueName: \"kubernetes.io/projected/41ca1166-555e-4be2-b998-59bad45528df-kube-api-access-hnn7v\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.952283 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "48ef7152-2ec1-4cfa-b2ab-88ff2fb42401" (UID: "48ef7152-2ec1-4cfa-b2ab-88ff2fb42401"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.952309 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43747cdd-50ef-43df-b98d-a4d855984bb3-utilities" (OuterVolumeSpecName: "utilities") pod "43747cdd-50ef-43df-b98d-a4d855984bb3" (UID: "43747cdd-50ef-43df-b98d-a4d855984bb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.953298 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0c5586-e964-4734-a361-bcc6d34dfc8b-utilities" (OuterVolumeSpecName: "utilities") pod "3f0c5586-e964-4734-a361-bcc6d34dfc8b" (UID: "3f0c5586-e964-4734-a361-bcc6d34dfc8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.955823 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0c5586-e964-4734-a361-bcc6d34dfc8b-kube-api-access-hgmsf" (OuterVolumeSpecName: "kube-api-access-hgmsf") pod "3f0c5586-e964-4734-a361-bcc6d34dfc8b" (UID: "3f0c5586-e964-4734-a361-bcc6d34dfc8b"). InnerVolumeSpecName "kube-api-access-hgmsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.956010 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-kube-api-access-mmfbq" (OuterVolumeSpecName: "kube-api-access-mmfbq") pod "48ef7152-2ec1-4cfa-b2ab-88ff2fb42401" (UID: "48ef7152-2ec1-4cfa-b2ab-88ff2fb42401"). InnerVolumeSpecName "kube-api-access-mmfbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.959767 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43747cdd-50ef-43df-b98d-a4d855984bb3-kube-api-access-v24ff" (OuterVolumeSpecName: "kube-api-access-v24ff") pod "43747cdd-50ef-43df-b98d-a4d855984bb3" (UID: "43747cdd-50ef-43df-b98d-a4d855984bb3"). InnerVolumeSpecName "kube-api-access-v24ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.964451 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "48ef7152-2ec1-4cfa-b2ab-88ff2fb42401" (UID: "48ef7152-2ec1-4cfa-b2ab-88ff2fb42401"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.971871 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41ca1166-555e-4be2-b998-59bad45528df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41ca1166-555e-4be2-b998-59bad45528df" (UID: "41ca1166-555e-4be2-b998-59bad45528df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:10:47 crc kubenswrapper[4811]: I1203 00:10:47.973199 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0c5586-e964-4734-a361-bcc6d34dfc8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f0c5586-e964-4734-a361-bcc6d34dfc8b" (UID: "3f0c5586-e964-4734-a361-bcc6d34dfc8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.003420 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43747cdd-50ef-43df-b98d-a4d855984bb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43747cdd-50ef-43df-b98d-a4d855984bb3" (UID: "43747cdd-50ef-43df-b98d-a4d855984bb3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.052529 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ca1166-555e-4be2-b998-59bad45528df-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.052557 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43747cdd-50ef-43df-b98d-a4d855984bb3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.052567 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v24ff\" (UniqueName: \"kubernetes.io/projected/43747cdd-50ef-43df-b98d-a4d855984bb3-kube-api-access-v24ff\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.052580 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0c5586-e964-4734-a361-bcc6d34dfc8b-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.052590 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmfbq\" (UniqueName: \"kubernetes.io/projected/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-kube-api-access-mmfbq\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.052598 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.052607 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43747cdd-50ef-43df-b98d-a4d855984bb3-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.052616 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401-marketplace-trusted-ca\") on 
node \"crc\" DevicePath \"\"" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.052625 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0c5586-e964-4734-a361-bcc6d34dfc8b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.052634 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgmsf\" (UniqueName: \"kubernetes.io/projected/3f0c5586-e964-4734-a361-bcc6d34dfc8b-kube-api-access-hgmsf\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.126786 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e02c207e-d2f6-4c42-8e80-8967413395c0" path="/var/lib/kubelet/pods/e02c207e-d2f6-4c42-8e80-8967413395c0/volumes" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.597016 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqlcb" event={"ID":"41ca1166-555e-4be2-b998-59bad45528df","Type":"ContainerDied","Data":"b877bd9a58e01c2f1dcb9cac4106e9982471bb8eb7432f2e3d1d1fc7223baab9"} Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.597108 4811 scope.go:117] "RemoveContainer" containerID="aa7f4d80e744bc98654863977f7c091a969f9f2ac95f63d12f0580072e19ce9d" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.597604 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bqlcb" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.605019 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ms8z7" event={"ID":"43747cdd-50ef-43df-b98d-a4d855984bb3","Type":"ContainerDied","Data":"4f98333a99ca5aabf1c34b055ad08a4b0da6b85f24f3e158cfb700f34520f475"} Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.605059 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ms8z7" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.610623 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj4zn" event={"ID":"3f0c5586-e964-4734-a361-bcc6d34dfc8b","Type":"ContainerDied","Data":"74b75ad3a433aa02dd0cefd660a91967dd2519819679ef0ccae00aa24759d304"} Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.610654 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj4zn" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.615310 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" event={"ID":"48ef7152-2ec1-4cfa-b2ab-88ff2fb42401","Type":"ContainerDied","Data":"557d778a468e8212c985e7ef004ae55aded7f763531e6d7a01b45a0e8d6c8065"} Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.615442 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-956mn" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.629180 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bqlcb"] Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.637157 4811 scope.go:117] "RemoveContainer" containerID="2f6af9c994afa620181edd778102e2b2b073dd701286f236302e0d7fa0b07b4e" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.638217 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bqlcb"] Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.676848 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ms8z7"] Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.684397 4811 scope.go:117] "RemoveContainer" containerID="90fdba381f8ac502cd6781ddd513f3c9daa11f0db70399a1cfa58676ffcb653b" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.691730 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ms8z7"] Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.699891 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj4zn"] Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.708698 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj4zn"] Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.711052 4811 scope.go:117] "RemoveContainer" containerID="b7fe22ee668a14a52302ca06ae4348d8ce6079f5efc96d1abc135ea03322d3cc" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.712582 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-956mn"] Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.716121 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-956mn"] Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.741425 4811 scope.go:117] "RemoveContainer" containerID="295af902fb079365ec1da2d58423cd1c7ebe017aad8b9f2608ff4aad6020fc51" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.761193 4811 scope.go:117] "RemoveContainer" containerID="434eb47b63d7255af346c68a2f85afbccf09f0b2c4c9aeeadc7e0c737fded88b" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.782704 4811 scope.go:117] "RemoveContainer" containerID="b5d327540158dfbbe80be66ef25b17a96b63276bb2b7f122e48e283d43d60c07" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.804970 4811 scope.go:117] "RemoveContainer" containerID="d442f42eb337e48916250b3e54403de674c3dcf8e5652a176901c31bbbcdc71f" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.826753 4811 scope.go:117] "RemoveContainer" containerID="b1f76391188410635e90f935efd06f21f7a4aab0801583fe52d5107b0b2a3d47" Dec 03 00:10:48 crc kubenswrapper[4811]: I1203 00:10:48.843642 4811 scope.go:117] "RemoveContainer" containerID="a1d2bc9c58af5f510b3f2b3ba17cc69195216f7ea8b34e87518c5221514d44dd" Dec 03 00:10:50 crc kubenswrapper[4811]: I1203 00:10:50.127600 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" path="/var/lib/kubelet/pods/3f0c5586-e964-4734-a361-bcc6d34dfc8b/volumes" Dec 03 00:10:50 crc kubenswrapper[4811]: I1203 00:10:50.128776 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ca1166-555e-4be2-b998-59bad45528df" 
path="/var/lib/kubelet/pods/41ca1166-555e-4be2-b998-59bad45528df/volumes" Dec 03 00:10:50 crc kubenswrapper[4811]: I1203 00:10:50.130033 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43747cdd-50ef-43df-b98d-a4d855984bb3" path="/var/lib/kubelet/pods/43747cdd-50ef-43df-b98d-a4d855984bb3/volumes" Dec 03 00:10:50 crc kubenswrapper[4811]: I1203 00:10:50.131220 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ef7152-2ec1-4cfa-b2ab-88ff2fb42401" path="/var/lib/kubelet/pods/48ef7152-2ec1-4cfa-b2ab-88ff2fb42401/volumes" Dec 03 00:11:09 crc kubenswrapper[4811]: I1203 00:11:09.708113 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2mbnl"] Dec 03 00:11:09 crc kubenswrapper[4811]: I1203 00:11:09.710597 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" podUID="4c1aa648-c5db-406f-9d3e-bc1ab95d29c4" containerName="controller-manager" containerID="cri-o://4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e" gracePeriod=30 Dec 03 00:11:09 crc kubenswrapper[4811]: I1203 00:11:09.821003 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9"] Dec 03 00:11:09 crc kubenswrapper[4811]: I1203 00:11:09.821777 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" podUID="a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7" containerName="route-controller-manager" containerID="cri-o://ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7" gracePeriod=30 Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.084821 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.159129 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-client-ca\") pod \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.159224 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-config\") pod \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.159256 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-proxy-ca-bundles\") pod \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.159316 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl54d\" (UniqueName: \"kubernetes.io/projected/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-kube-api-access-pl54d\") pod \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.159354 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-serving-cert\") pod \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\" (UID: \"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4\") " Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.160437 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-client-ca" (OuterVolumeSpecName: "client-ca") pod "4c1aa648-c5db-406f-9d3e-bc1ab95d29c4" (UID: "4c1aa648-c5db-406f-9d3e-bc1ab95d29c4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.160577 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-config" (OuterVolumeSpecName: "config") pod "4c1aa648-c5db-406f-9d3e-bc1ab95d29c4" (UID: "4c1aa648-c5db-406f-9d3e-bc1ab95d29c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.160874 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4c1aa648-c5db-406f-9d3e-bc1ab95d29c4" (UID: "4c1aa648-c5db-406f-9d3e-bc1ab95d29c4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.166942 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-kube-api-access-pl54d" (OuterVolumeSpecName: "kube-api-access-pl54d") pod "4c1aa648-c5db-406f-9d3e-bc1ab95d29c4" (UID: "4c1aa648-c5db-406f-9d3e-bc1ab95d29c4"). InnerVolumeSpecName "kube-api-access-pl54d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.166972 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4c1aa648-c5db-406f-9d3e-bc1ab95d29c4" (UID: "4c1aa648-c5db-406f-9d3e-bc1ab95d29c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.195315 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.260626 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-config\") pod \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.260714 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmmq6\" (UniqueName: \"kubernetes.io/projected/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-kube-api-access-tmmq6\") pod \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.260751 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-serving-cert\") pod \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.260794 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-client-ca\") pod \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\" (UID: \"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7\") " Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.261046 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.261064 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.261074 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl54d\" (UniqueName: \"kubernetes.io/projected/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-kube-api-access-pl54d\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.261083 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.261091 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.262101 4811 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7" (UID: "a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.262156 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-config" (OuterVolumeSpecName: "config") pod "a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7" (UID: "a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.265355 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7" (UID: "a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.265523 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-kube-api-access-tmmq6" (OuterVolumeSpecName: "kube-api-access-tmmq6") pod "a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7" (UID: "a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7"). InnerVolumeSpecName "kube-api-access-tmmq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.362205 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmmq6\" (UniqueName: \"kubernetes.io/projected/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-kube-api-access-tmmq6\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.362292 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.362311 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.362331 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.747295 4811 generic.go:334] "Generic (PLEG): container finished" podID="a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7" containerID="ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7" exitCode=0 Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.747361 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.747390 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" event={"ID":"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7","Type":"ContainerDied","Data":"ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7"} Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.747428 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9" event={"ID":"a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7","Type":"ContainerDied","Data":"b065da9920d8f7e2737b587aec2a9e30f878f5dc93d6197e817c971b1011808f"} Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.747450 4811 scope.go:117] "RemoveContainer" containerID="ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.750315 4811 generic.go:334] "Generic (PLEG): container finished" podID="4c1aa648-c5db-406f-9d3e-bc1ab95d29c4" containerID="4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e" exitCode=0 Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.750363 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" event={"ID":"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4","Type":"ContainerDied","Data":"4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e"} Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.750531 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" event={"ID":"4c1aa648-c5db-406f-9d3e-bc1ab95d29c4","Type":"ContainerDied","Data":"9c3909e843a5beff12872ddf9bbbac286b42e6a050bc9689d17b7fb18396412b"} Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.751118 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2mbnl" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.768239 4811 scope.go:117] "RemoveContainer" containerID="ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.768859 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7\": container with ID starting with ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7 not found: ID does not exist" containerID="ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.768914 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7"} err="failed to get container status \"ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7\": rpc error: code = NotFound desc = could not find container \"ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7\": container with ID starting with ca3d4fcd3e2c13ef6c72cd3a806cd31e5883760ed4a36f4d797e70345c03b0b7 not found: ID does not exist" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.768946 4811 scope.go:117] "RemoveContainer" containerID="4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.787852 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9"] Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.789971 4811 scope.go:117] "RemoveContainer" containerID="4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.790868 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e\": container with ID starting with 4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e not found: ID does not exist" containerID="4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.790974 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e"} err="failed to get container status \"4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e\": rpc error: code = NotFound desc = could not find container \"4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e\": container with ID starting with 4a0574a7014209775aa80e1a7f98a232c489ec2bab58a0b88bab8764ba72a24e not found: ID does not exist" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.791683 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pdrt9"] Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.806839 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2mbnl"] Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.809738 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2mbnl"] Dec 03 00:11:10 crc 
kubenswrapper[4811]: I1203 00:11:10.929917 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-796b8686b6-d7ptp"] Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930332 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02c207e-d2f6-4c42-8e80-8967413395c0" containerName="registry-server" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930350 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02c207e-d2f6-4c42-8e80-8967413395c0" containerName="registry-server" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930363 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ca1166-555e-4be2-b998-59bad45528df" containerName="extract-utilities" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930370 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ca1166-555e-4be2-b998-59bad45528df" containerName="extract-utilities" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930420 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" containerName="extract-utilities" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930432 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" containerName="extract-utilities" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930443 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02c207e-d2f6-4c42-8e80-8967413395c0" containerName="extract-content" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930451 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02c207e-d2f6-4c42-8e80-8967413395c0" containerName="extract-content" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930464 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43747cdd-50ef-43df-b98d-a4d855984bb3" containerName="extract-utilities" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930471 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="43747cdd-50ef-43df-b98d-a4d855984bb3" containerName="extract-utilities" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930483 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ca1166-555e-4be2-b998-59bad45528df" containerName="extract-content" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930489 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ca1166-555e-4be2-b998-59bad45528df" containerName="extract-content" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930502 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1aa648-c5db-406f-9d3e-bc1ab95d29c4" containerName="controller-manager" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930510 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1aa648-c5db-406f-9d3e-bc1ab95d29c4" containerName="controller-manager" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930519 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" containerName="registry-server" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930528 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" containerName="registry-server" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930538 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7" 
containerName="route-controller-manager" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930545 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7" containerName="route-controller-manager" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930555 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ca1166-555e-4be2-b998-59bad45528df" containerName="registry-server" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930562 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ca1166-555e-4be2-b998-59bad45528df" containerName="registry-server" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930581 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ef7152-2ec1-4cfa-b2ab-88ff2fb42401" containerName="marketplace-operator" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930588 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ef7152-2ec1-4cfa-b2ab-88ff2fb42401" containerName="marketplace-operator" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930598 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02c207e-d2f6-4c42-8e80-8967413395c0" containerName="extract-utilities" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930605 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02c207e-d2f6-4c42-8e80-8967413395c0" containerName="extract-utilities" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930613 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43747cdd-50ef-43df-b98d-a4d855984bb3" containerName="extract-content" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930621 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="43747cdd-50ef-43df-b98d-a4d855984bb3" containerName="extract-content" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930631 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" containerName="extract-content" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930637 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" containerName="extract-content" Dec 03 00:11:10 crc kubenswrapper[4811]: E1203 00:11:10.930651 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43747cdd-50ef-43df-b98d-a4d855984bb3" containerName="registry-server" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930659 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="43747cdd-50ef-43df-b98d-a4d855984bb3" containerName="registry-server" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930815 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e02c207e-d2f6-4c42-8e80-8967413395c0" containerName="registry-server" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930834 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7" containerName="route-controller-manager" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930848 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1aa648-c5db-406f-9d3e-bc1ab95d29c4" containerName="controller-manager" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930859 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f0c5586-e964-4734-a361-bcc6d34dfc8b" containerName="registry-server" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930868 4811 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="43747cdd-50ef-43df-b98d-a4d855984bb3" containerName="registry-server" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930878 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ef7152-2ec1-4cfa-b2ab-88ff2fb42401" containerName="marketplace-operator" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.930890 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ca1166-555e-4be2-b998-59bad45528df" containerName="registry-server" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.931537 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.933584 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.934197 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.934328 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.934445 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.934535 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.937869 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv"] Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.939209 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.941135 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.941427 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.944574 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.944658 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.944975 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.945408 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.946000 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.946410 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.952302 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv"] Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.961924 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796b8686b6-d7ptp"] Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.970775 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcrvf\" (UniqueName: \"kubernetes.io/projected/12ada5f5-e0c5-446b-9309-1cca5d876d21-kube-api-access-bcrvf\") pod \"route-controller-manager-58fdc6fc78-gcjzv\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.970827 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-client-ca\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.970857 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr9hw\" (UniqueName: \"kubernetes.io/projected/55e10fd8-ce24-4f7b-a299-40b52d542e54-kube-api-access-sr9hw\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.970883 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-config\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.970901 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55e10fd8-ce24-4f7b-a299-40b52d542e54-serving-cert\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.970918 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12ada5f5-e0c5-446b-9309-1cca5d876d21-serving-cert\") pod \"route-controller-manager-58fdc6fc78-gcjzv\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.970933 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-proxy-ca-bundles\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.970962 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ada5f5-e0c5-446b-9309-1cca5d876d21-config\") pod \"route-controller-manager-58fdc6fc78-gcjzv\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:10 crc kubenswrapper[4811]: I1203 00:11:10.970986 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12ada5f5-e0c5-446b-9309-1cca5d876d21-client-ca\") pod \"route-controller-manager-58fdc6fc78-gcjzv\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.033357 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv"] Dec 03 00:11:11 crc kubenswrapper[4811]: E1203 00:11:11.033809 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-bcrvf serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" podUID="12ada5f5-e0c5-446b-9309-1cca5d876d21" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.036917 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-796b8686b6-d7ptp"] Dec 03 00:11:11 crc kubenswrapper[4811]: E1203 00:11:11.037061 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-sr9hw proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" podUID="55e10fd8-ce24-4f7b-a299-40b52d542e54" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.072870 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr9hw\" (UniqueName: \"kubernetes.io/projected/55e10fd8-ce24-4f7b-a299-40b52d542e54-kube-api-access-sr9hw\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.072960 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-config\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.073005 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12ada5f5-e0c5-446b-9309-1cca5d876d21-serving-cert\") pod \"route-controller-manager-58fdc6fc78-gcjzv\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.073030 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55e10fd8-ce24-4f7b-a299-40b52d542e54-serving-cert\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.073051 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-proxy-ca-bundles\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.073084 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ada5f5-e0c5-446b-9309-1cca5d876d21-config\") pod \"route-controller-manager-58fdc6fc78-gcjzv\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.073115 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12ada5f5-e0c5-446b-9309-1cca5d876d21-client-ca\") pod \"route-controller-manager-58fdc6fc78-gcjzv\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.073163 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcrvf\" (UniqueName: \"kubernetes.io/projected/12ada5f5-e0c5-446b-9309-1cca5d876d21-kube-api-access-bcrvf\") pod \"route-controller-manager-58fdc6fc78-gcjzv\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 
00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.073193 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-client-ca\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.074537 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-client-ca\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.076068 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-config\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.077365 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ada5f5-e0c5-446b-9309-1cca5d876d21-config\") pod \"route-controller-manager-58fdc6fc78-gcjzv\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.078657 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12ada5f5-e0c5-446b-9309-1cca5d876d21-client-ca\") pod \"route-controller-manager-58fdc6fc78-gcjzv\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.081102 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-proxy-ca-bundles\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.081969 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55e10fd8-ce24-4f7b-a299-40b52d542e54-serving-cert\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.083214 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12ada5f5-e0c5-446b-9309-1cca5d876d21-serving-cert\") pod \"route-controller-manager-58fdc6fc78-gcjzv\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.100434 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcrvf\" (UniqueName: 
\"kubernetes.io/projected/12ada5f5-e0c5-446b-9309-1cca5d876d21-kube-api-access-bcrvf\") pod \"route-controller-manager-58fdc6fc78-gcjzv\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.103076 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr9hw\" (UniqueName: \"kubernetes.io/projected/55e10fd8-ce24-4f7b-a299-40b52d542e54-kube-api-access-sr9hw\") pod \"controller-manager-796b8686b6-d7ptp\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.757612 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.757627 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.766878 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.772565 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.782148 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcrvf\" (UniqueName: \"kubernetes.io/projected/12ada5f5-e0c5-446b-9309-1cca5d876d21-kube-api-access-bcrvf\") pod \"12ada5f5-e0c5-446b-9309-1cca5d876d21\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.782211 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12ada5f5-e0c5-446b-9309-1cca5d876d21-client-ca\") pod \"12ada5f5-e0c5-446b-9309-1cca5d876d21\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.782332 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12ada5f5-e0c5-446b-9309-1cca5d876d21-serving-cert\") pod \"12ada5f5-e0c5-446b-9309-1cca5d876d21\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.782462 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ada5f5-e0c5-446b-9309-1cca5d876d21-config\") pod \"12ada5f5-e0c5-446b-9309-1cca5d876d21\" (UID: \"12ada5f5-e0c5-446b-9309-1cca5d876d21\") " Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.782693 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12ada5f5-e0c5-446b-9309-1cca5d876d21-client-ca" (OuterVolumeSpecName: "client-ca") pod "12ada5f5-e0c5-446b-9309-1cca5d876d21" (UID: "12ada5f5-e0c5-446b-9309-1cca5d876d21"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.783068 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12ada5f5-e0c5-446b-9309-1cca5d876d21-config" (OuterVolumeSpecName: "config") pod "12ada5f5-e0c5-446b-9309-1cca5d876d21" (UID: "12ada5f5-e0c5-446b-9309-1cca5d876d21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.783077 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12ada5f5-e0c5-446b-9309-1cca5d876d21-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.785458 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ada5f5-e0c5-446b-9309-1cca5d876d21-kube-api-access-bcrvf" (OuterVolumeSpecName: "kube-api-access-bcrvf") pod "12ada5f5-e0c5-446b-9309-1cca5d876d21" (UID: "12ada5f5-e0c5-446b-9309-1cca5d876d21"). InnerVolumeSpecName "kube-api-access-bcrvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.790901 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ada5f5-e0c5-446b-9309-1cca5d876d21-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "12ada5f5-e0c5-446b-9309-1cca5d876d21" (UID: "12ada5f5-e0c5-446b-9309-1cca5d876d21"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.883946 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55e10fd8-ce24-4f7b-a299-40b52d542e54-serving-cert\") pod \"55e10fd8-ce24-4f7b-a299-40b52d542e54\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.884022 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-proxy-ca-bundles\") pod \"55e10fd8-ce24-4f7b-a299-40b52d542e54\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.884049 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr9hw\" (UniqueName: \"kubernetes.io/projected/55e10fd8-ce24-4f7b-a299-40b52d542e54-kube-api-access-sr9hw\") pod \"55e10fd8-ce24-4f7b-a299-40b52d542e54\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.884072 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-config\") pod \"55e10fd8-ce24-4f7b-a299-40b52d542e54\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.884189 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-client-ca\") pod \"55e10fd8-ce24-4f7b-a299-40b52d542e54\" (UID: \"55e10fd8-ce24-4f7b-a299-40b52d542e54\") " Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.884440 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/12ada5f5-e0c5-446b-9309-1cca5d876d21-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.884460 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ada5f5-e0c5-446b-9309-1cca5d876d21-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.884471 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcrvf\" (UniqueName: \"kubernetes.io/projected/12ada5f5-e0c5-446b-9309-1cca5d876d21-kube-api-access-bcrvf\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.884729 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "55e10fd8-ce24-4f7b-a299-40b52d542e54" (UID: "55e10fd8-ce24-4f7b-a299-40b52d542e54"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.884883 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-client-ca" (OuterVolumeSpecName: "client-ca") pod "55e10fd8-ce24-4f7b-a299-40b52d542e54" (UID: "55e10fd8-ce24-4f7b-a299-40b52d542e54"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.884997 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-config" (OuterVolumeSpecName: "config") pod "55e10fd8-ce24-4f7b-a299-40b52d542e54" (UID: "55e10fd8-ce24-4f7b-a299-40b52d542e54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.888566 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e10fd8-ce24-4f7b-a299-40b52d542e54-kube-api-access-sr9hw" (OuterVolumeSpecName: "kube-api-access-sr9hw") pod "55e10fd8-ce24-4f7b-a299-40b52d542e54" (UID: "55e10fd8-ce24-4f7b-a299-40b52d542e54"). InnerVolumeSpecName "kube-api-access-sr9hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.888860 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e10fd8-ce24-4f7b-a299-40b52d542e54-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "55e10fd8-ce24-4f7b-a299-40b52d542e54" (UID: "55e10fd8-ce24-4f7b-a299-40b52d542e54"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.985848 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.985899 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55e10fd8-ce24-4f7b-a299-40b52d542e54-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.985912 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.985929 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr9hw\" (UniqueName: \"kubernetes.io/projected/55e10fd8-ce24-4f7b-a299-40b52d542e54-kube-api-access-sr9hw\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:11 crc kubenswrapper[4811]: I1203 00:11:11.985942 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55e10fd8-ce24-4f7b-a299-40b52d542e54-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.128708 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1aa648-c5db-406f-9d3e-bc1ab95d29c4" path="/var/lib/kubelet/pods/4c1aa648-c5db-406f-9d3e-bc1ab95d29c4/volumes" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.130346 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7" path="/var/lib/kubelet/pods/a3ecb0f3-7fb4-4af9-b08e-7efa7e8851f7/volumes" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.764089 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.764211 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b8686b6-d7ptp" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.838949 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-796b8686b6-d7ptp"] Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.858838 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-598844dbc9-gtjm6"] Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.864386 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.864452 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-796b8686b6-d7ptp"] Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.870769 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.871079 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.871470 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.871496 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.871911 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.874241 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.875636 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598844dbc9-gtjm6"] Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.885821 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.897746 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-proxy-ca-bundles\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.897816 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-config\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.897840 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-client-ca\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.897882 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-serving-cert\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 
00:11:12.897912 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhfz7\" (UniqueName: \"kubernetes.io/projected/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-kube-api-access-nhfz7\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.900348 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv"] Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.905391 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fdc6fc78-gcjzv"] Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.998760 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-proxy-ca-bundles\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.998868 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-client-ca\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.998903 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-config\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.998939 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-serving-cert\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.998960 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhfz7\" (UniqueName: \"kubernetes.io/projected/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-kube-api-access-nhfz7\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:12 crc kubenswrapper[4811]: I1203 00:11:12.999883 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-client-ca\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:13 crc kubenswrapper[4811]: I1203 00:11:13.000407 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-config\") pod 
\"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:13 crc kubenswrapper[4811]: I1203 00:11:13.000758 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-proxy-ca-bundles\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:13 crc kubenswrapper[4811]: I1203 00:11:13.008817 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-serving-cert\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:13 crc kubenswrapper[4811]: I1203 00:11:13.019805 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhfz7\" (UniqueName: \"kubernetes.io/projected/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-kube-api-access-nhfz7\") pod \"controller-manager-598844dbc9-gtjm6\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:13 crc kubenswrapper[4811]: I1203 00:11:13.185642 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:13 crc kubenswrapper[4811]: I1203 00:11:13.378779 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598844dbc9-gtjm6"] Dec 03 00:11:13 crc kubenswrapper[4811]: W1203 00:11:13.387413 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod781c6c8d_dcd0_41c7_90a8_c015fcb36e46.slice/crio-81d9944bdbcb012e203bda7fcdacc2c306e0afde71e9b510ebb535564af70927 WatchSource:0}: Error finding container 81d9944bdbcb012e203bda7fcdacc2c306e0afde71e9b510ebb535564af70927: Status 404 returned error can't find the container with id 81d9944bdbcb012e203bda7fcdacc2c306e0afde71e9b510ebb535564af70927 Dec 03 00:11:13 crc kubenswrapper[4811]: I1203 00:11:13.773947 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" event={"ID":"781c6c8d-dcd0-41c7-90a8-c015fcb36e46","Type":"ContainerStarted","Data":"81d9944bdbcb012e203bda7fcdacc2c306e0afde71e9b510ebb535564af70927"} Dec 03 00:11:14 crc kubenswrapper[4811]: I1203 00:11:14.121389 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ada5f5-e0c5-446b-9309-1cca5d876d21" path="/var/lib/kubelet/pods/12ada5f5-e0c5-446b-9309-1cca5d876d21/volumes" Dec 03 00:11:14 crc kubenswrapper[4811]: I1203 00:11:14.121847 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e10fd8-ce24-4f7b-a299-40b52d542e54" path="/var/lib/kubelet/pods/55e10fd8-ce24-4f7b-a299-40b52d542e54/volumes" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.332607 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw"] Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.333926 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.339037 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.339380 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.339474 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.339480 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.339515 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.339628 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.351329 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw"] Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.439529 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vhc7\" (UniqueName: \"kubernetes.io/projected/a4ea77ac-8a89-4ed0-bff5-aefac2874345-kube-api-access-7vhc7\") pod \"route-controller-manager-65b45d8f5-v5bdw\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.439625 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4ea77ac-8a89-4ed0-bff5-aefac2874345-client-ca\") pod \"route-controller-manager-65b45d8f5-v5bdw\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.439734 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4ea77ac-8a89-4ed0-bff5-aefac2874345-serving-cert\") pod \"route-controller-manager-65b45d8f5-v5bdw\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.439791 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ea77ac-8a89-4ed0-bff5-aefac2874345-config\") pod \"route-controller-manager-65b45d8f5-v5bdw\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.540531 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vhc7\" (UniqueName: \"kubernetes.io/projected/a4ea77ac-8a89-4ed0-bff5-aefac2874345-kube-api-access-7vhc7\") pod 
\"route-controller-manager-65b45d8f5-v5bdw\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.540623 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4ea77ac-8a89-4ed0-bff5-aefac2874345-client-ca\") pod \"route-controller-manager-65b45d8f5-v5bdw\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.540672 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4ea77ac-8a89-4ed0-bff5-aefac2874345-serving-cert\") pod \"route-controller-manager-65b45d8f5-v5bdw\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.540712 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ea77ac-8a89-4ed0-bff5-aefac2874345-config\") pod \"route-controller-manager-65b45d8f5-v5bdw\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.542613 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4ea77ac-8a89-4ed0-bff5-aefac2874345-client-ca\") pod \"route-controller-manager-65b45d8f5-v5bdw\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.542909 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ea77ac-8a89-4ed0-bff5-aefac2874345-config\") pod \"route-controller-manager-65b45d8f5-v5bdw\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.556442 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4ea77ac-8a89-4ed0-bff5-aefac2874345-serving-cert\") pod \"route-controller-manager-65b45d8f5-v5bdw\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.567647 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vhc7\" (UniqueName: \"kubernetes.io/projected/a4ea77ac-8a89-4ed0-bff5-aefac2874345-kube-api-access-7vhc7\") pod \"route-controller-manager-65b45d8f5-v5bdw\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:15 crc kubenswrapper[4811]: I1203 00:11:15.650717 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:16 crc kubenswrapper[4811]: I1203 00:11:16.103161 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw"] Dec 03 00:11:16 crc kubenswrapper[4811]: W1203 00:11:16.114336 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4ea77ac_8a89_4ed0_bff5_aefac2874345.slice/crio-1f094f5231d9826fe782f7d5b2d3853fa6d5af289cee13c0aff70b44cf3c6f45 WatchSource:0}: Error finding container 1f094f5231d9826fe782f7d5b2d3853fa6d5af289cee13c0aff70b44cf3c6f45: Status 404 returned error can't find the container with id 1f094f5231d9826fe782f7d5b2d3853fa6d5af289cee13c0aff70b44cf3c6f45 Dec 03 00:11:16 crc kubenswrapper[4811]: I1203 00:11:16.802452 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" event={"ID":"a4ea77ac-8a89-4ed0-bff5-aefac2874345","Type":"ContainerStarted","Data":"1f094f5231d9826fe782f7d5b2d3853fa6d5af289cee13c0aff70b44cf3c6f45"} Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.345798 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n7ftt"] Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.348094 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.351391 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7ftt"] Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.351428 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.391969 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69dda76c-6613-4f85-98cd-2597a053c1cb-catalog-content\") pod \"redhat-operators-n7ftt\" (UID: \"69dda76c-6613-4f85-98cd-2597a053c1cb\") " pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.392053 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69dda76c-6613-4f85-98cd-2597a053c1cb-utilities\") pod \"redhat-operators-n7ftt\" (UID: \"69dda76c-6613-4f85-98cd-2597a053c1cb\") " pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.392148 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lkkq\" (UniqueName: \"kubernetes.io/projected/69dda76c-6613-4f85-98cd-2597a053c1cb-kube-api-access-7lkkq\") pod \"redhat-operators-n7ftt\" (UID: \"69dda76c-6613-4f85-98cd-2597a053c1cb\") " pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.492879 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lkkq\" (UniqueName: \"kubernetes.io/projected/69dda76c-6613-4f85-98cd-2597a053c1cb-kube-api-access-7lkkq\") pod \"redhat-operators-n7ftt\" (UID: \"69dda76c-6613-4f85-98cd-2597a053c1cb\") " pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 
00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.492955 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69dda76c-6613-4f85-98cd-2597a053c1cb-catalog-content\") pod \"redhat-operators-n7ftt\" (UID: \"69dda76c-6613-4f85-98cd-2597a053c1cb\") " pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.492983 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69dda76c-6613-4f85-98cd-2597a053c1cb-utilities\") pod \"redhat-operators-n7ftt\" (UID: \"69dda76c-6613-4f85-98cd-2597a053c1cb\") " pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.493646 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69dda76c-6613-4f85-98cd-2597a053c1cb-utilities\") pod \"redhat-operators-n7ftt\" (UID: \"69dda76c-6613-4f85-98cd-2597a053c1cb\") " pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.494245 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69dda76c-6613-4f85-98cd-2597a053c1cb-catalog-content\") pod \"redhat-operators-n7ftt\" (UID: \"69dda76c-6613-4f85-98cd-2597a053c1cb\") " pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.530138 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lkkq\" (UniqueName: \"kubernetes.io/projected/69dda76c-6613-4f85-98cd-2597a053c1cb-kube-api-access-7lkkq\") pod \"redhat-operators-n7ftt\" (UID: \"69dda76c-6613-4f85-98cd-2597a053c1cb\") " pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.542035 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kj29q"] Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.543552 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.545554 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.593657 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kj29q"] Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.595632 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-utilities\") pod \"redhat-marketplace-kj29q\" (UID: \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\") " pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.595841 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ggqk\" (UniqueName: \"kubernetes.io/projected/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-kube-api-access-5ggqk\") pod \"redhat-marketplace-kj29q\" (UID: \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\") " pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.596018 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-catalog-content\") pod \"redhat-marketplace-kj29q\" (UID: \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\") " pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.667712 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.697627 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-utilities\") pod \"redhat-marketplace-kj29q\" (UID: \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\") " pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.697701 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ggqk\" (UniqueName: \"kubernetes.io/projected/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-kube-api-access-5ggqk\") pod \"redhat-marketplace-kj29q\" (UID: \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\") " pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.697752 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-catalog-content\") pod \"redhat-marketplace-kj29q\" (UID: \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\") " pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.698082 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-utilities\") pod \"redhat-marketplace-kj29q\" (UID: \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\") " pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.698149 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-catalog-content\") pod \"redhat-marketplace-kj29q\" (UID: \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\") " pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.716723 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ggqk\" (UniqueName: \"kubernetes.io/projected/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-kube-api-access-5ggqk\") pod \"redhat-marketplace-kj29q\" (UID: \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\") " pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.822791 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" event={"ID":"781c6c8d-dcd0-41c7-90a8-c015fcb36e46","Type":"ContainerStarted","Data":"997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53"} Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.823487 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.825028 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" event={"ID":"a4ea77ac-8a89-4ed0-bff5-aefac2874345","Type":"ContainerStarted","Data":"b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf"} Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.827671 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:19 crc 
kubenswrapper[4811]: I1203 00:11:19.830443 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.832861 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.844762 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" podStartSLOduration=8.844729706 podStartE2EDuration="8.844729706s" podCreationTimestamp="2025-12-03 00:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:11:19.839123617 +0000 UTC m=+319.980953089" watchObservedRunningTime="2025-12-03 00:11:19.844729706 +0000 UTC m=+319.986559178" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.868150 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.868539 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" podStartSLOduration=8.868512797 podStartE2EDuration="8.868512797s" podCreationTimestamp="2025-12-03 00:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:11:19.867575565 +0000 UTC m=+320.009405157" watchObservedRunningTime="2025-12-03 00:11:19.868512797 +0000 UTC m=+320.010342269" Dec 03 00:11:19 crc kubenswrapper[4811]: I1203 00:11:19.890860 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7ftt"] Dec 03 00:11:19 crc kubenswrapper[4811]: W1203 00:11:19.897318 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69dda76c_6613_4f85_98cd_2597a053c1cb.slice/crio-665ad2ba7cafcf3bcdeb7f7bee2a2febb28b3a2b59b2d708be62a40735172668 WatchSource:0}: Error finding container 665ad2ba7cafcf3bcdeb7f7bee2a2febb28b3a2b59b2d708be62a40735172668: Status 404 returned error can't find the container with id 665ad2ba7cafcf3bcdeb7f7bee2a2febb28b3a2b59b2d708be62a40735172668 Dec 03 00:11:20 crc kubenswrapper[4811]: I1203 00:11:20.105853 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kj29q"] Dec 03 00:11:20 crc kubenswrapper[4811]: I1203 00:11:20.832425 4811 generic.go:334] "Generic (PLEG): container finished" podID="b71b215c-c0c4-49e6-aa06-a4025a1fd22d" containerID="a6e8652dbed505a632d4b7be6971d2a64f7fe04a5854a9ba47a8a7f9591c210f" exitCode=0 Dec 03 00:11:20 crc kubenswrapper[4811]: I1203 00:11:20.832520 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kj29q" event={"ID":"b71b215c-c0c4-49e6-aa06-a4025a1fd22d","Type":"ContainerDied","Data":"a6e8652dbed505a632d4b7be6971d2a64f7fe04a5854a9ba47a8a7f9591c210f"} Dec 03 00:11:20 crc kubenswrapper[4811]: I1203 00:11:20.834470 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kj29q" 
event={"ID":"b71b215c-c0c4-49e6-aa06-a4025a1fd22d","Type":"ContainerStarted","Data":"f7c31e1cbea71ccc88b75d47d306f3d60759080a7462ff907d75de84e2312cf7"} Dec 03 00:11:20 crc kubenswrapper[4811]: I1203 00:11:20.837018 4811 generic.go:334] "Generic (PLEG): container finished" podID="69dda76c-6613-4f85-98cd-2597a053c1cb" containerID="5b1c455aaddd39f39c79aac24bc0904e1223cc197f6083311ce50af4319e3b6b" exitCode=0 Dec 03 00:11:20 crc kubenswrapper[4811]: I1203 00:11:20.837968 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7ftt" event={"ID":"69dda76c-6613-4f85-98cd-2597a053c1cb","Type":"ContainerDied","Data":"5b1c455aaddd39f39c79aac24bc0904e1223cc197f6083311ce50af4319e3b6b"} Dec 03 00:11:20 crc kubenswrapper[4811]: I1203 00:11:20.838058 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7ftt" event={"ID":"69dda76c-6613-4f85-98cd-2597a053c1cb","Type":"ContainerStarted","Data":"665ad2ba7cafcf3bcdeb7f7bee2a2febb28b3a2b59b2d708be62a40735172668"} Dec 03 00:11:21 crc kubenswrapper[4811]: I1203 00:11:21.741688 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5c6xs"] Dec 03 00:11:21 crc kubenswrapper[4811]: I1203 00:11:21.745161 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:21 crc kubenswrapper[4811]: I1203 00:11:21.748361 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 00:11:21 crc kubenswrapper[4811]: I1203 00:11:21.754219 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5c6xs"] Dec 03 00:11:21 crc kubenswrapper[4811]: I1203 00:11:21.929499 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvchv\" (UniqueName: \"kubernetes.io/projected/1961ba8c-1b10-4d02-b842-0fe5be50900e-kube-api-access-tvchv\") pod \"community-operators-5c6xs\" (UID: \"1961ba8c-1b10-4d02-b842-0fe5be50900e\") " pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:21 crc kubenswrapper[4811]: I1203 00:11:21.929566 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1961ba8c-1b10-4d02-b842-0fe5be50900e-utilities\") pod \"community-operators-5c6xs\" (UID: \"1961ba8c-1b10-4d02-b842-0fe5be50900e\") " pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:21 crc kubenswrapper[4811]: I1203 00:11:21.929890 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1961ba8c-1b10-4d02-b842-0fe5be50900e-catalog-content\") pod \"community-operators-5c6xs\" (UID: \"1961ba8c-1b10-4d02-b842-0fe5be50900e\") " pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:21 crc kubenswrapper[4811]: I1203 00:11:21.939438 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lr4rt"] Dec 03 00:11:21 crc kubenswrapper[4811]: I1203 00:11:21.940622 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:21 crc kubenswrapper[4811]: I1203 00:11:21.953574 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 00:11:21 crc kubenswrapper[4811]: I1203 00:11:21.957404 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lr4rt"] Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.031067 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1961ba8c-1b10-4d02-b842-0fe5be50900e-catalog-content\") pod \"community-operators-5c6xs\" (UID: \"1961ba8c-1b10-4d02-b842-0fe5be50900e\") " pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.031173 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvchv\" (UniqueName: \"kubernetes.io/projected/1961ba8c-1b10-4d02-b842-0fe5be50900e-kube-api-access-tvchv\") pod \"community-operators-5c6xs\" (UID: \"1961ba8c-1b10-4d02-b842-0fe5be50900e\") " pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.031303 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1961ba8c-1b10-4d02-b842-0fe5be50900e-utilities\") pod \"community-operators-5c6xs\" (UID: \"1961ba8c-1b10-4d02-b842-0fe5be50900e\") " pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.032436 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1961ba8c-1b10-4d02-b842-0fe5be50900e-catalog-content\") pod \"community-operators-5c6xs\" (UID: \"1961ba8c-1b10-4d02-b842-0fe5be50900e\") " pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.032594 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1961ba8c-1b10-4d02-b842-0fe5be50900e-utilities\") pod \"community-operators-5c6xs\" (UID: \"1961ba8c-1b10-4d02-b842-0fe5be50900e\") " pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.050454 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvchv\" (UniqueName: \"kubernetes.io/projected/1961ba8c-1b10-4d02-b842-0fe5be50900e-kube-api-access-tvchv\") pod \"community-operators-5c6xs\" (UID: \"1961ba8c-1b10-4d02-b842-0fe5be50900e\") " pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.064774 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.134743 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99jv7\" (UniqueName: \"kubernetes.io/projected/c3f5e37b-17ad-4570-82d9-03b680a5ff7c-kube-api-access-99jv7\") pod \"certified-operators-lr4rt\" (UID: \"c3f5e37b-17ad-4570-82d9-03b680a5ff7c\") " pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.135253 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3f5e37b-17ad-4570-82d9-03b680a5ff7c-catalog-content\") pod \"certified-operators-lr4rt\" (UID: \"c3f5e37b-17ad-4570-82d9-03b680a5ff7c\") " pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.135423 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3f5e37b-17ad-4570-82d9-03b680a5ff7c-utilities\") pod \"certified-operators-lr4rt\" (UID: \"c3f5e37b-17ad-4570-82d9-03b680a5ff7c\") " pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.236502 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99jv7\" (UniqueName: \"kubernetes.io/projected/c3f5e37b-17ad-4570-82d9-03b680a5ff7c-kube-api-access-99jv7\") pod \"certified-operators-lr4rt\" (UID: \"c3f5e37b-17ad-4570-82d9-03b680a5ff7c\") " pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.236578 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3f5e37b-17ad-4570-82d9-03b680a5ff7c-catalog-content\") pod \"certified-operators-lr4rt\" (UID: \"c3f5e37b-17ad-4570-82d9-03b680a5ff7c\") " pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.236617 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3f5e37b-17ad-4570-82d9-03b680a5ff7c-utilities\") pod \"certified-operators-lr4rt\" (UID: \"c3f5e37b-17ad-4570-82d9-03b680a5ff7c\") " pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.237708 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3f5e37b-17ad-4570-82d9-03b680a5ff7c-utilities\") pod \"certified-operators-lr4rt\" (UID: \"c3f5e37b-17ad-4570-82d9-03b680a5ff7c\") " pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.237959 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3f5e37b-17ad-4570-82d9-03b680a5ff7c-catalog-content\") pod \"certified-operators-lr4rt\" (UID: \"c3f5e37b-17ad-4570-82d9-03b680a5ff7c\") " pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.257050 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99jv7\" (UniqueName: \"kubernetes.io/projected/c3f5e37b-17ad-4570-82d9-03b680a5ff7c-kube-api-access-99jv7\") pod 
\"certified-operators-lr4rt\" (UID: \"c3f5e37b-17ad-4570-82d9-03b680a5ff7c\") " pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.280903 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.539809 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5c6xs"] Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.719211 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lr4rt"] Dec 03 00:11:22 crc kubenswrapper[4811]: W1203 00:11:22.773553 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3f5e37b_17ad_4570_82d9_03b680a5ff7c.slice/crio-e22936efab58eaef83f9e6344cd8456bbf495c121e83fbafd542e922e405e022 WatchSource:0}: Error finding container e22936efab58eaef83f9e6344cd8456bbf495c121e83fbafd542e922e405e022: Status 404 returned error can't find the container with id e22936efab58eaef83f9e6344cd8456bbf495c121e83fbafd542e922e405e022 Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.848874 4811 generic.go:334] "Generic (PLEG): container finished" podID="1961ba8c-1b10-4d02-b842-0fe5be50900e" containerID="a5442df1fe4009f79eb3de475d3799c5828ad3c337d68ee7301dc694e95f449c" exitCode=0 Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.848981 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c6xs" event={"ID":"1961ba8c-1b10-4d02-b842-0fe5be50900e","Type":"ContainerDied","Data":"a5442df1fe4009f79eb3de475d3799c5828ad3c337d68ee7301dc694e95f449c"} Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.849057 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c6xs" event={"ID":"1961ba8c-1b10-4d02-b842-0fe5be50900e","Type":"ContainerStarted","Data":"adbdabf1723517e4a786d553b7236bf8d404d1b9b74d7b0b80a61f3234d87632"} Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.850130 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr4rt" event={"ID":"c3f5e37b-17ad-4570-82d9-03b680a5ff7c","Type":"ContainerStarted","Data":"e22936efab58eaef83f9e6344cd8456bbf495c121e83fbafd542e922e405e022"} Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.851871 4811 generic.go:334] "Generic (PLEG): container finished" podID="b71b215c-c0c4-49e6-aa06-a4025a1fd22d" containerID="5bad260441bffc201d9b0cdcea075530bcfe4318f39788776dc24a048f7fdd08" exitCode=0 Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.851943 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kj29q" event={"ID":"b71b215c-c0c4-49e6-aa06-a4025a1fd22d","Type":"ContainerDied","Data":"5bad260441bffc201d9b0cdcea075530bcfe4318f39788776dc24a048f7fdd08"} Dec 03 00:11:22 crc kubenswrapper[4811]: I1203 00:11:22.855022 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7ftt" event={"ID":"69dda76c-6613-4f85-98cd-2597a053c1cb","Type":"ContainerStarted","Data":"f72363fcd22e299602ba93bcff80c57fe8603471ce4d017c6b70856512a8bf66"} Dec 03 00:11:23 crc kubenswrapper[4811]: I1203 00:11:23.862305 4811 generic.go:334] "Generic (PLEG): container finished" podID="c3f5e37b-17ad-4570-82d9-03b680a5ff7c" 
containerID="abccb1f2e4bb8966e21d0e9b976765003f7f2860b5d8e4049b826ce514d9a5e4" exitCode=0 Dec 03 00:11:23 crc kubenswrapper[4811]: I1203 00:11:23.862391 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr4rt" event={"ID":"c3f5e37b-17ad-4570-82d9-03b680a5ff7c","Type":"ContainerDied","Data":"abccb1f2e4bb8966e21d0e9b976765003f7f2860b5d8e4049b826ce514d9a5e4"} Dec 03 00:11:23 crc kubenswrapper[4811]: I1203 00:11:23.865304 4811 generic.go:334] "Generic (PLEG): container finished" podID="69dda76c-6613-4f85-98cd-2597a053c1cb" containerID="f72363fcd22e299602ba93bcff80c57fe8603471ce4d017c6b70856512a8bf66" exitCode=0 Dec 03 00:11:23 crc kubenswrapper[4811]: I1203 00:11:23.865349 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7ftt" event={"ID":"69dda76c-6613-4f85-98cd-2597a053c1cb","Type":"ContainerDied","Data":"f72363fcd22e299602ba93bcff80c57fe8603471ce4d017c6b70856512a8bf66"} Dec 03 00:11:29 crc kubenswrapper[4811]: I1203 00:11:29.905619 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c6xs" event={"ID":"1961ba8c-1b10-4d02-b842-0fe5be50900e","Type":"ContainerStarted","Data":"be4cc46d62de3256d5a8c797da8763d76b1a00e7b6e783f368ca4b1ce5152e0d"} Dec 03 00:11:29 crc kubenswrapper[4811]: I1203 00:11:29.909402 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kj29q" event={"ID":"b71b215c-c0c4-49e6-aa06-a4025a1fd22d","Type":"ContainerStarted","Data":"360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4"} Dec 03 00:11:29 crc kubenswrapper[4811]: I1203 00:11:29.911301 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7ftt" event={"ID":"69dda76c-6613-4f85-98cd-2597a053c1cb","Type":"ContainerStarted","Data":"32dffe0925d232a618eaa7cd47b83430cf773a9e1cc322861a1b82a5e21542a1"} Dec 03 00:11:29 crc kubenswrapper[4811]: I1203 00:11:29.957883 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kj29q" podStartSLOduration=2.629561459 podStartE2EDuration="10.957853986s" podCreationTimestamp="2025-12-03 00:11:19 +0000 UTC" firstStartedPulling="2025-12-03 00:11:20.834213785 +0000 UTC m=+320.976043257" lastFinishedPulling="2025-12-03 00:11:29.162506312 +0000 UTC m=+329.304335784" observedRunningTime="2025-12-03 00:11:29.951230168 +0000 UTC m=+330.093059640" watchObservedRunningTime="2025-12-03 00:11:29.957853986 +0000 UTC m=+330.099683458" Dec 03 00:11:29 crc kubenswrapper[4811]: I1203 00:11:29.975992 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n7ftt" podStartSLOduration=2.571830521 podStartE2EDuration="10.975973398s" podCreationTimestamp="2025-12-03 00:11:19 +0000 UTC" firstStartedPulling="2025-12-03 00:11:20.839662821 +0000 UTC m=+320.981492293" lastFinishedPulling="2025-12-03 00:11:29.243805698 +0000 UTC m=+329.385635170" observedRunningTime="2025-12-03 00:11:29.974823981 +0000 UTC m=+330.116653463" watchObservedRunningTime="2025-12-03 00:11:29.975973398 +0000 UTC m=+330.117802870" Dec 03 00:11:30 crc kubenswrapper[4811]: I1203 00:11:30.919422 4811 generic.go:334] "Generic (PLEG): container finished" podID="1961ba8c-1b10-4d02-b842-0fe5be50900e" containerID="be4cc46d62de3256d5a8c797da8763d76b1a00e7b6e783f368ca4b1ce5152e0d" exitCode=0 Dec 03 00:11:30 crc kubenswrapper[4811]: I1203 00:11:30.919542 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c6xs" event={"ID":"1961ba8c-1b10-4d02-b842-0fe5be50900e","Type":"ContainerDied","Data":"be4cc46d62de3256d5a8c797da8763d76b1a00e7b6e783f368ca4b1ce5152e0d"} Dec 03 00:11:32 crc kubenswrapper[4811]: I1203 00:11:32.933501 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c6xs" event={"ID":"1961ba8c-1b10-4d02-b842-0fe5be50900e","Type":"ContainerStarted","Data":"2a3521374a085ce19f90dacf6077a7331066532915205d62f6c6834b82c3dbdc"} Dec 03 00:11:32 crc kubenswrapper[4811]: I1203 00:11:32.936024 4811 generic.go:334] "Generic (PLEG): container finished" podID="c3f5e37b-17ad-4570-82d9-03b680a5ff7c" containerID="414a603ce2028384650a4e33eef422fb425033879cd104740d150350a3dc80ce" exitCode=0 Dec 03 00:11:32 crc kubenswrapper[4811]: I1203 00:11:32.936083 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr4rt" event={"ID":"c3f5e37b-17ad-4570-82d9-03b680a5ff7c","Type":"ContainerDied","Data":"414a603ce2028384650a4e33eef422fb425033879cd104740d150350a3dc80ce"} Dec 03 00:11:32 crc kubenswrapper[4811]: I1203 00:11:32.955028 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5c6xs" podStartSLOduration=2.23125777 podStartE2EDuration="11.955007327s" podCreationTimestamp="2025-12-03 00:11:21 +0000 UTC" firstStartedPulling="2025-12-03 00:11:22.850924481 +0000 UTC m=+322.992753953" lastFinishedPulling="2025-12-03 00:11:32.574674038 +0000 UTC m=+332.716503510" observedRunningTime="2025-12-03 00:11:32.954124346 +0000 UTC m=+333.095953818" watchObservedRunningTime="2025-12-03 00:11:32.955007327 +0000 UTC m=+333.096836799" Dec 03 00:11:34 crc kubenswrapper[4811]: I1203 00:11:34.331762 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr4rt" event={"ID":"c3f5e37b-17ad-4570-82d9-03b680a5ff7c","Type":"ContainerStarted","Data":"2fbe34d712beaa9ce1cb010e6a894b86953abee812d9723098665260d05ad82a"} Dec 03 00:11:34 crc kubenswrapper[4811]: I1203 00:11:34.355932 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lr4rt" podStartSLOduration=3.714000312 podStartE2EDuration="13.355913875s" podCreationTimestamp="2025-12-03 00:11:21 +0000 UTC" firstStartedPulling="2025-12-03 00:11:23.864872897 +0000 UTC m=+324.006702359" lastFinishedPulling="2025-12-03 00:11:33.50678645 +0000 UTC m=+333.648615922" observedRunningTime="2025-12-03 00:11:34.35399703 +0000 UTC m=+334.495826512" watchObservedRunningTime="2025-12-03 00:11:34.355913875 +0000 UTC m=+334.497743347" Dec 03 00:11:39 crc kubenswrapper[4811]: I1203 00:11:39.668768 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:39 crc kubenswrapper[4811]: I1203 00:11:39.669278 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:39 crc kubenswrapper[4811]: I1203 00:11:39.727145 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:39 crc kubenswrapper[4811]: I1203 00:11:39.868670 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:39 crc kubenswrapper[4811]: I1203 00:11:39.868771 4811 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:39 crc kubenswrapper[4811]: I1203 00:11:39.919030 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:40 crc kubenswrapper[4811]: I1203 00:11:40.411391 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n7ftt" Dec 03 00:11:40 crc kubenswrapper[4811]: I1203 00:11:40.441785 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:11:42 crc kubenswrapper[4811]: I1203 00:11:42.065562 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:42 crc kubenswrapper[4811]: I1203 00:11:42.065630 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:42 crc kubenswrapper[4811]: I1203 00:11:42.125229 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:42 crc kubenswrapper[4811]: I1203 00:11:42.281779 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:42 crc kubenswrapper[4811]: I1203 00:11:42.281825 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:42 crc kubenswrapper[4811]: I1203 00:11:42.339755 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:42 crc kubenswrapper[4811]: I1203 00:11:42.412392 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5c6xs" Dec 03 00:11:42 crc kubenswrapper[4811]: I1203 00:11:42.423446 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lr4rt" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.026141 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sznxc"] Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.027579 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.042466 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sznxc"] Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.166526 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8ls8\" (UniqueName: \"kubernetes.io/projected/b3410e9e-e9f7-4d2f-8259-0ed541629c46-kube-api-access-l8ls8\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.166590 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3410e9e-e9f7-4d2f-8259-0ed541629c46-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.166610 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3410e9e-e9f7-4d2f-8259-0ed541629c46-trusted-ca\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.166649 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3410e9e-e9f7-4d2f-8259-0ed541629c46-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.166673 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3410e9e-e9f7-4d2f-8259-0ed541629c46-bound-sa-token\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.166775 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3410e9e-e9f7-4d2f-8259-0ed541629c46-registry-certificates\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.166892 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.166998 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/b3410e9e-e9f7-4d2f-8259-0ed541629c46-registry-tls\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.193826 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.267871 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3410e9e-e9f7-4d2f-8259-0ed541629c46-registry-tls\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.268274 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8ls8\" (UniqueName: \"kubernetes.io/projected/b3410e9e-e9f7-4d2f-8259-0ed541629c46-kube-api-access-l8ls8\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.268316 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3410e9e-e9f7-4d2f-8259-0ed541629c46-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.268340 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3410e9e-e9f7-4d2f-8259-0ed541629c46-trusted-ca\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.268386 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3410e9e-e9f7-4d2f-8259-0ed541629c46-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.268416 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3410e9e-e9f7-4d2f-8259-0ed541629c46-bound-sa-token\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.268447 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3410e9e-e9f7-4d2f-8259-0ed541629c46-registry-certificates\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.269544 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3410e9e-e9f7-4d2f-8259-0ed541629c46-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.270369 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3410e9e-e9f7-4d2f-8259-0ed541629c46-registry-certificates\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.270620 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3410e9e-e9f7-4d2f-8259-0ed541629c46-trusted-ca\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.278433 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3410e9e-e9f7-4d2f-8259-0ed541629c46-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.285923 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3410e9e-e9f7-4d2f-8259-0ed541629c46-registry-tls\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.289961 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3410e9e-e9f7-4d2f-8259-0ed541629c46-bound-sa-token\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.290374 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8ls8\" (UniqueName: \"kubernetes.io/projected/b3410e9e-e9f7-4d2f-8259-0ed541629c46-kube-api-access-l8ls8\") pod \"image-registry-66df7c8f76-sznxc\" (UID: \"b3410e9e-e9f7-4d2f-8259-0ed541629c46\") " pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.346330 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:48 crc kubenswrapper[4811]: I1203 00:11:48.795661 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sznxc"] Dec 03 00:11:49 crc kubenswrapper[4811]: I1203 00:11:49.413692 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" event={"ID":"b3410e9e-e9f7-4d2f-8259-0ed541629c46","Type":"ContainerStarted","Data":"cf58174dd9a59e37432831dd80dd81ccbfbe87866c58222a65454e5d967d21b5"} Dec 03 00:11:50 crc kubenswrapper[4811]: I1203 00:11:50.422353 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" event={"ID":"b3410e9e-e9f7-4d2f-8259-0ed541629c46","Type":"ContainerStarted","Data":"9242aaaca3bdf070ed00d7a671cb09fcfe1c5f5c3c7505649c69092e4a2678bc"} Dec 03 00:11:50 crc kubenswrapper[4811]: I1203 00:11:50.423588 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:11:50 crc kubenswrapper[4811]: I1203 00:11:50.442910 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" podStartSLOduration=2.442887427 podStartE2EDuration="2.442887427s" podCreationTimestamp="2025-12-03 00:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:11:50.439140238 +0000 UTC m=+350.580969710" watchObservedRunningTime="2025-12-03 00:11:50.442887427 +0000 UTC m=+350.584716909" Dec 03 00:12:02 crc kubenswrapper[4811]: I1203 00:12:02.940924 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:12:02 crc kubenswrapper[4811]: I1203 00:12:02.941430 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:12:08 crc kubenswrapper[4811]: I1203 00:12:08.357117 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-sznxc" Dec 03 00:12:08 crc kubenswrapper[4811]: I1203 00:12:08.429421 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cgwfj"] Dec 03 00:12:09 crc kubenswrapper[4811]: I1203 00:12:09.676162 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598844dbc9-gtjm6"] Dec 03 00:12:09 crc kubenswrapper[4811]: I1203 00:12:09.676921 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" podUID="781c6c8d-dcd0-41c7-90a8-c015fcb36e46" containerName="controller-manager" containerID="cri-o://997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53" gracePeriod=30 Dec 03 00:12:09 crc kubenswrapper[4811]: I1203 00:12:09.692721 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw"] Dec 03 00:12:09 crc kubenswrapper[4811]: I1203 00:12:09.693301 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" podUID="a4ea77ac-8a89-4ed0-bff5-aefac2874345" containerName="route-controller-manager" containerID="cri-o://b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf" gracePeriod=30 Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.153850 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.158509 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.259087 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4ea77ac-8a89-4ed0-bff5-aefac2874345-client-ca\") pod \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.259137 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-proxy-ca-bundles\") pod \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.259195 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhfz7\" (UniqueName: \"kubernetes.io/projected/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-kube-api-access-nhfz7\") pod \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.259225 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vhc7\" (UniqueName: \"kubernetes.io/projected/a4ea77ac-8a89-4ed0-bff5-aefac2874345-kube-api-access-7vhc7\") pod \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.259248 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ea77ac-8a89-4ed0-bff5-aefac2874345-config\") pod \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.260279 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4ea77ac-8a89-4ed0-bff5-aefac2874345-serving-cert\") pod \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\" (UID: \"a4ea77ac-8a89-4ed0-bff5-aefac2874345\") " Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.260552 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-serving-cert\") pod \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.260254 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "781c6c8d-dcd0-41c7-90a8-c015fcb36e46" (UID: "781c6c8d-dcd0-41c7-90a8-c015fcb36e46"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.260278 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ea77ac-8a89-4ed0-bff5-aefac2874345-config" (OuterVolumeSpecName: "config") pod "a4ea77ac-8a89-4ed0-bff5-aefac2874345" (UID: "a4ea77ac-8a89-4ed0-bff5-aefac2874345"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.260581 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-config\") pod \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.260712 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-client-ca\") pod \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\" (UID: \"781c6c8d-dcd0-41c7-90a8-c015fcb36e46\") " Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.261167 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ea77ac-8a89-4ed0-bff5-aefac2874345-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.261182 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.261244 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-config" (OuterVolumeSpecName: "config") pod "781c6c8d-dcd0-41c7-90a8-c015fcb36e46" (UID: "781c6c8d-dcd0-41c7-90a8-c015fcb36e46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.261407 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ea77ac-8a89-4ed0-bff5-aefac2874345-client-ca" (OuterVolumeSpecName: "client-ca") pod "a4ea77ac-8a89-4ed0-bff5-aefac2874345" (UID: "a4ea77ac-8a89-4ed0-bff5-aefac2874345"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.261414 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-client-ca" (OuterVolumeSpecName: "client-ca") pod "781c6c8d-dcd0-41c7-90a8-c015fcb36e46" (UID: "781c6c8d-dcd0-41c7-90a8-c015fcb36e46"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.264717 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ea77ac-8a89-4ed0-bff5-aefac2874345-kube-api-access-7vhc7" (OuterVolumeSpecName: "kube-api-access-7vhc7") pod "a4ea77ac-8a89-4ed0-bff5-aefac2874345" (UID: "a4ea77ac-8a89-4ed0-bff5-aefac2874345"). InnerVolumeSpecName "kube-api-access-7vhc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.264720 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ea77ac-8a89-4ed0-bff5-aefac2874345-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a4ea77ac-8a89-4ed0-bff5-aefac2874345" (UID: "a4ea77ac-8a89-4ed0-bff5-aefac2874345"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.264760 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-kube-api-access-nhfz7" (OuterVolumeSpecName: "kube-api-access-nhfz7") pod "781c6c8d-dcd0-41c7-90a8-c015fcb36e46" (UID: "781c6c8d-dcd0-41c7-90a8-c015fcb36e46"). InnerVolumeSpecName "kube-api-access-nhfz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.265034 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "781c6c8d-dcd0-41c7-90a8-c015fcb36e46" (UID: "781c6c8d-dcd0-41c7-90a8-c015fcb36e46"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.362374 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vhc7\" (UniqueName: \"kubernetes.io/projected/a4ea77ac-8a89-4ed0-bff5-aefac2874345-kube-api-access-7vhc7\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.362417 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4ea77ac-8a89-4ed0-bff5-aefac2874345-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.362432 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.362443 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.362455 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.362468 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4ea77ac-8a89-4ed0-bff5-aefac2874345-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.362481 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhfz7\" (UniqueName: \"kubernetes.io/projected/781c6c8d-dcd0-41c7-90a8-c015fcb36e46-kube-api-access-nhfz7\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.530911 4811 generic.go:334] "Generic (PLEG): container finished" podID="781c6c8d-dcd0-41c7-90a8-c015fcb36e46" containerID="997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53" exitCode=0 Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.530962 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.531031 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" event={"ID":"781c6c8d-dcd0-41c7-90a8-c015fcb36e46","Type":"ContainerDied","Data":"997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53"} Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.531082 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598844dbc9-gtjm6" event={"ID":"781c6c8d-dcd0-41c7-90a8-c015fcb36e46","Type":"ContainerDied","Data":"81d9944bdbcb012e203bda7fcdacc2c306e0afde71e9b510ebb535564af70927"} Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.531108 4811 scope.go:117] "RemoveContainer" containerID="997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.533931 4811 generic.go:334] "Generic (PLEG): container finished" podID="a4ea77ac-8a89-4ed0-bff5-aefac2874345" containerID="b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf" exitCode=0 Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.533998 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" event={"ID":"a4ea77ac-8a89-4ed0-bff5-aefac2874345","Type":"ContainerDied","Data":"b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf"} Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.534045 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" event={"ID":"a4ea77ac-8a89-4ed0-bff5-aefac2874345","Type":"ContainerDied","Data":"1f094f5231d9826fe782f7d5b2d3853fa6d5af289cee13c0aff70b44cf3c6f45"} Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.534014 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.559678 4811 scope.go:117] "RemoveContainer" containerID="997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53" Dec 03 00:12:10 crc kubenswrapper[4811]: E1203 00:12:10.561061 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53\": container with ID starting with 997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53 not found: ID does not exist" containerID="997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.561131 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53"} err="failed to get container status \"997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53\": rpc error: code = NotFound desc = could not find container \"997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53\": container with ID starting with 997d101c566189bffab6e15574b47380dce22159d64c67b116513e629a2f2a53 not found: ID does not exist" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.561175 4811 scope.go:117] "RemoveContainer" containerID="b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.563050 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598844dbc9-gtjm6"] Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.571588 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-598844dbc9-gtjm6"] Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.582990 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw"] Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.585026 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b45d8f5-v5bdw"] Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.586941 4811 scope.go:117] "RemoveContainer" containerID="b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf" Dec 03 00:12:10 crc kubenswrapper[4811]: E1203 00:12:10.587919 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf\": container with ID starting with b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf not found: ID does not exist" containerID="b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf" Dec 03 00:12:10 crc kubenswrapper[4811]: I1203 00:12:10.587960 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf"} err="failed to get container status \"b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf\": rpc error: code = NotFound desc = could not find container \"b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf\": container with ID starting with b5974f3008d8413e8e87e4d6b0ea5440d4988815e65d8c1e36b638bf27a15edf not found: ID does not exist" Dec 03 
00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.379088 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m"] Dec 03 00:12:11 crc kubenswrapper[4811]: E1203 00:12:11.379455 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781c6c8d-dcd0-41c7-90a8-c015fcb36e46" containerName="controller-manager" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.379477 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="781c6c8d-dcd0-41c7-90a8-c015fcb36e46" containerName="controller-manager" Dec 03 00:12:11 crc kubenswrapper[4811]: E1203 00:12:11.379502 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ea77ac-8a89-4ed0-bff5-aefac2874345" containerName="route-controller-manager" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.379511 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ea77ac-8a89-4ed0-bff5-aefac2874345" containerName="route-controller-manager" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.379661 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ea77ac-8a89-4ed0-bff5-aefac2874345" containerName="route-controller-manager" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.379690 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="781c6c8d-dcd0-41c7-90a8-c015fcb36e46" containerName="controller-manager" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.380353 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.382925 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.383674 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.383921 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.384112 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.384576 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.390115 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.399130 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-796b8686b6-p2fkz"] Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.400447 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.404559 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.404595 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.405331 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.405394 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.405656 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.406180 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.415696 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m"] Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.420403 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.424586 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796b8686b6-p2fkz"] Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.579562 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a572b7c5-f5ce-4385-999b-7af78d247303-proxy-ca-bundles\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.579657 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm6kz\" (UniqueName: \"kubernetes.io/projected/12832f32-31ec-492d-a96d-693d12f97676-kube-api-access-dm6kz\") pod \"route-controller-manager-58fdc6fc78-95r9m\" (UID: \"12832f32-31ec-492d-a96d-693d12f97676\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.580824 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljz9z\" (UniqueName: \"kubernetes.io/projected/a572b7c5-f5ce-4385-999b-7af78d247303-kube-api-access-ljz9z\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.581292 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12832f32-31ec-492d-a96d-693d12f97676-serving-cert\") pod \"route-controller-manager-58fdc6fc78-95r9m\" (UID: \"12832f32-31ec-492d-a96d-693d12f97676\") " 
pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.581359 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12832f32-31ec-492d-a96d-693d12f97676-config\") pod \"route-controller-manager-58fdc6fc78-95r9m\" (UID: \"12832f32-31ec-492d-a96d-693d12f97676\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.581385 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a572b7c5-f5ce-4385-999b-7af78d247303-serving-cert\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.581489 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a572b7c5-f5ce-4385-999b-7af78d247303-client-ca\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.581523 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a572b7c5-f5ce-4385-999b-7af78d247303-config\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.581558 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12832f32-31ec-492d-a96d-693d12f97676-client-ca\") pod \"route-controller-manager-58fdc6fc78-95r9m\" (UID: \"12832f32-31ec-492d-a96d-693d12f97676\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.682321 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12832f32-31ec-492d-a96d-693d12f97676-client-ca\") pod \"route-controller-manager-58fdc6fc78-95r9m\" (UID: \"12832f32-31ec-492d-a96d-693d12f97676\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.682434 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a572b7c5-f5ce-4385-999b-7af78d247303-proxy-ca-bundles\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.682476 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm6kz\" (UniqueName: \"kubernetes.io/projected/12832f32-31ec-492d-a96d-693d12f97676-kube-api-access-dm6kz\") pod \"route-controller-manager-58fdc6fc78-95r9m\" (UID: \"12832f32-31ec-492d-a96d-693d12f97676\") " 
pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.682530 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljz9z\" (UniqueName: \"kubernetes.io/projected/a572b7c5-f5ce-4385-999b-7af78d247303-kube-api-access-ljz9z\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.682554 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12832f32-31ec-492d-a96d-693d12f97676-serving-cert\") pod \"route-controller-manager-58fdc6fc78-95r9m\" (UID: \"12832f32-31ec-492d-a96d-693d12f97676\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.682588 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12832f32-31ec-492d-a96d-693d12f97676-config\") pod \"route-controller-manager-58fdc6fc78-95r9m\" (UID: \"12832f32-31ec-492d-a96d-693d12f97676\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.682615 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a572b7c5-f5ce-4385-999b-7af78d247303-serving-cert\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.682655 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a572b7c5-f5ce-4385-999b-7af78d247303-client-ca\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.682690 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a572b7c5-f5ce-4385-999b-7af78d247303-config\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.684043 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12832f32-31ec-492d-a96d-693d12f97676-client-ca\") pod \"route-controller-manager-58fdc6fc78-95r9m\" (UID: \"12832f32-31ec-492d-a96d-693d12f97676\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.684736 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a572b7c5-f5ce-4385-999b-7af78d247303-config\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.685123 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a572b7c5-f5ce-4385-999b-7af78d247303-client-ca\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.685351 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a572b7c5-f5ce-4385-999b-7af78d247303-proxy-ca-bundles\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.685567 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12832f32-31ec-492d-a96d-693d12f97676-config\") pod \"route-controller-manager-58fdc6fc78-95r9m\" (UID: \"12832f32-31ec-492d-a96d-693d12f97676\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.689745 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a572b7c5-f5ce-4385-999b-7af78d247303-serving-cert\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.689788 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12832f32-31ec-492d-a96d-693d12f97676-serving-cert\") pod \"route-controller-manager-58fdc6fc78-95r9m\" (UID: \"12832f32-31ec-492d-a96d-693d12f97676\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.711758 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm6kz\" (UniqueName: \"kubernetes.io/projected/12832f32-31ec-492d-a96d-693d12f97676-kube-api-access-dm6kz\") pod \"route-controller-manager-58fdc6fc78-95r9m\" (UID: \"12832f32-31ec-492d-a96d-693d12f97676\") " pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.713987 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljz9z\" (UniqueName: \"kubernetes.io/projected/a572b7c5-f5ce-4385-999b-7af78d247303-kube-api-access-ljz9z\") pod \"controller-manager-796b8686b6-p2fkz\" (UID: \"a572b7c5-f5ce-4385-999b-7af78d247303\") " pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:11 crc kubenswrapper[4811]: I1203 00:12:11.728318 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.006835 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.010332 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796b8686b6-p2fkz"] Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.122622 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781c6c8d-dcd0-41c7-90a8-c015fcb36e46" path="/var/lib/kubelet/pods/781c6c8d-dcd0-41c7-90a8-c015fcb36e46/volumes" Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.123424 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4ea77ac-8a89-4ed0-bff5-aefac2874345" path="/var/lib/kubelet/pods/a4ea77ac-8a89-4ed0-bff5-aefac2874345/volumes" Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.435526 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m"] Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.547836 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" event={"ID":"a572b7c5-f5ce-4385-999b-7af78d247303","Type":"ContainerStarted","Data":"9574606dac37073261cd7618c871156226a9009f2f31c861e4eb891046d9bc8b"} Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.547890 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" event={"ID":"a572b7c5-f5ce-4385-999b-7af78d247303","Type":"ContainerStarted","Data":"1365360fb668d64a0d0d5c45c3c743d49500e08f284037655cbd63b18c16c403"} Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.548221 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.553397 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.553440 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" event={"ID":"12832f32-31ec-492d-a96d-693d12f97676","Type":"ContainerStarted","Data":"ed89fbfa53b0a9739d2d77adb59cb5c1e0e9924b45f97019e0fced7880352a35"} Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.553459 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" event={"ID":"12832f32-31ec-492d-a96d-693d12f97676","Type":"ContainerStarted","Data":"ef5243619119e61507e1fe1aca6d0ccaf885e014b1a3369763697990ade6d949"} Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.553687 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.555084 4811 patch_prober.go:28] interesting pod/route-controller-manager-58fdc6fc78-95r9m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.555145 4811 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" podUID="12832f32-31ec-492d-a96d-693d12f97676" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Dec 03 00:12:12 crc kubenswrapper[4811]: I1203 00:12:12.566078 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-796b8686b6-p2fkz" podStartSLOduration=3.566056037 podStartE2EDuration="3.566056037s" podCreationTimestamp="2025-12-03 00:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:12:12.563464585 +0000 UTC m=+372.705294067" watchObservedRunningTime="2025-12-03 00:12:12.566056037 +0000 UTC m=+372.707885499" Dec 03 00:12:13 crc kubenswrapper[4811]: I1203 00:12:13.563227 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" Dec 03 00:12:13 crc kubenswrapper[4811]: I1203 00:12:13.582542 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58fdc6fc78-95r9m" podStartSLOduration=4.582522249 podStartE2EDuration="4.582522249s" podCreationTimestamp="2025-12-03 00:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:12:12.602991566 +0000 UTC m=+372.744821048" watchObservedRunningTime="2025-12-03 00:12:13.582522249 +0000 UTC m=+373.724351721" Dec 03 00:12:32 crc kubenswrapper[4811]: I1203 00:12:32.942533 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:12:32 crc kubenswrapper[4811]: I1203 00:12:32.943149 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:12:33 crc kubenswrapper[4811]: I1203 00:12:33.469216 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" podUID="71d18873-465e-4bc9-aca1-149975060eff" containerName="registry" containerID="cri-o://b26b6b30e2a2f5c71d2e13f8307836d7752db02eb13ee5aefd998b60c8ca22e5" gracePeriod=30 Dec 03 00:12:33 crc kubenswrapper[4811]: I1203 00:12:33.690511 4811 generic.go:334] "Generic (PLEG): container finished" podID="71d18873-465e-4bc9-aca1-149975060eff" containerID="b26b6b30e2a2f5c71d2e13f8307836d7752db02eb13ee5aefd998b60c8ca22e5" exitCode=0 Dec 03 00:12:33 crc kubenswrapper[4811]: I1203 00:12:33.690579 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" event={"ID":"71d18873-465e-4bc9-aca1-149975060eff","Type":"ContainerDied","Data":"b26b6b30e2a2f5c71d2e13f8307836d7752db02eb13ee5aefd998b60c8ca22e5"} Dec 03 00:12:33 crc kubenswrapper[4811]: I1203 00:12:33.932408 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.066988 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdp4n\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-kube-api-access-xdp4n\") pod \"71d18873-465e-4bc9-aca1-149975060eff\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.067040 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/71d18873-465e-4bc9-aca1-149975060eff-installation-pull-secrets\") pod \"71d18873-465e-4bc9-aca1-149975060eff\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.067062 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/71d18873-465e-4bc9-aca1-149975060eff-registry-certificates\") pod \"71d18873-465e-4bc9-aca1-149975060eff\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.067087 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-bound-sa-token\") pod \"71d18873-465e-4bc9-aca1-149975060eff\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.067134 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-registry-tls\") pod \"71d18873-465e-4bc9-aca1-149975060eff\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.067935 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71d18873-465e-4bc9-aca1-149975060eff-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "71d18873-465e-4bc9-aca1-149975060eff" (UID: "71d18873-465e-4bc9-aca1-149975060eff"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.068068 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"71d18873-465e-4bc9-aca1-149975060eff\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.068120 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71d18873-465e-4bc9-aca1-149975060eff-trusted-ca\") pod \"71d18873-465e-4bc9-aca1-149975060eff\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.068172 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/71d18873-465e-4bc9-aca1-149975060eff-ca-trust-extracted\") pod \"71d18873-465e-4bc9-aca1-149975060eff\" (UID: \"71d18873-465e-4bc9-aca1-149975060eff\") " Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.068435 4811 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/71d18873-465e-4bc9-aca1-149975060eff-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.068701 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71d18873-465e-4bc9-aca1-149975060eff-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "71d18873-465e-4bc9-aca1-149975060eff" (UID: "71d18873-465e-4bc9-aca1-149975060eff"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.072967 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-kube-api-access-xdp4n" (OuterVolumeSpecName: "kube-api-access-xdp4n") pod "71d18873-465e-4bc9-aca1-149975060eff" (UID: "71d18873-465e-4bc9-aca1-149975060eff"). InnerVolumeSpecName "kube-api-access-xdp4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.073375 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71d18873-465e-4bc9-aca1-149975060eff-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "71d18873-465e-4bc9-aca1-149975060eff" (UID: "71d18873-465e-4bc9-aca1-149975060eff"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.073432 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "71d18873-465e-4bc9-aca1-149975060eff" (UID: "71d18873-465e-4bc9-aca1-149975060eff"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.074833 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "71d18873-465e-4bc9-aca1-149975060eff" (UID: "71d18873-465e-4bc9-aca1-149975060eff"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.085849 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71d18873-465e-4bc9-aca1-149975060eff-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "71d18873-465e-4bc9-aca1-149975060eff" (UID: "71d18873-465e-4bc9-aca1-149975060eff"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.087561 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "71d18873-465e-4bc9-aca1-149975060eff" (UID: "71d18873-465e-4bc9-aca1-149975060eff"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.169437 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71d18873-465e-4bc9-aca1-149975060eff-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.169479 4811 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/71d18873-465e-4bc9-aca1-149975060eff-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.169491 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdp4n\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-kube-api-access-xdp4n\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.169502 4811 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/71d18873-465e-4bc9-aca1-149975060eff-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.169511 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.169520 4811 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/71d18873-465e-4bc9-aca1-149975060eff-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.698565 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" event={"ID":"71d18873-465e-4bc9-aca1-149975060eff","Type":"ContainerDied","Data":"dacfb284d444cbfef7a00639d6929d48a64a46301354a118a826fae482626745"} Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.698627 4811 scope.go:117] "RemoveContainer" 
containerID="b26b6b30e2a2f5c71d2e13f8307836d7752db02eb13ee5aefd998b60c8ca22e5" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.698778 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cgwfj" Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.718784 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cgwfj"] Dec 03 00:12:34 crc kubenswrapper[4811]: I1203 00:12:34.725397 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cgwfj"] Dec 03 00:12:36 crc kubenswrapper[4811]: I1203 00:12:36.123145 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71d18873-465e-4bc9-aca1-149975060eff" path="/var/lib/kubelet/pods/71d18873-465e-4bc9-aca1-149975060eff/volumes" Dec 03 00:13:02 crc kubenswrapper[4811]: I1203 00:13:02.940638 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:13:02 crc kubenswrapper[4811]: I1203 00:13:02.941227 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:13:02 crc kubenswrapper[4811]: I1203 00:13:02.941374 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:13:02 crc kubenswrapper[4811]: I1203 00:13:02.942359 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10561c3fa5ec63e76b89f65f6adfa64f4786ff83527fb29ebb98d13b1546c538"} pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:13:02 crc kubenswrapper[4811]: I1203 00:13:02.942492 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" containerID="cri-o://10561c3fa5ec63e76b89f65f6adfa64f4786ff83527fb29ebb98d13b1546c538" gracePeriod=600 Dec 03 00:13:03 crc kubenswrapper[4811]: I1203 00:13:03.891182 4811 generic.go:334] "Generic (PLEG): container finished" podID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerID="10561c3fa5ec63e76b89f65f6adfa64f4786ff83527fb29ebb98d13b1546c538" exitCode=0 Dec 03 00:13:03 crc kubenswrapper[4811]: I1203 00:13:03.891324 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerDied","Data":"10561c3fa5ec63e76b89f65f6adfa64f4786ff83527fb29ebb98d13b1546c538"} Dec 03 00:13:03 crc kubenswrapper[4811]: I1203 00:13:03.892098 4811 scope.go:117] "RemoveContainer" containerID="84203cab17265bf4c5b23a0adc9e642b29cdaa060a117d5429543cea297eac25" Dec 03 00:13:03 crc kubenswrapper[4811]: I1203 00:13:03.891860 4811 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerStarted","Data":"7fbc3e78d8acc5df7781522124d991f4e42780ce8b0fd9b01a7c2846f764d716"} Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.194696 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94"] Dec 03 00:15:00 crc kubenswrapper[4811]: E1203 00:15:00.195762 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71d18873-465e-4bc9-aca1-149975060eff" containerName="registry" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.195780 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="71d18873-465e-4bc9-aca1-149975060eff" containerName="registry" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.195942 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="71d18873-465e-4bc9-aca1-149975060eff" containerName="registry" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.196465 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.200682 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.201117 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.205323 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94"] Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.219930 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/949db1aa-c2b1-4e04-aa37-22399cf1f103-secret-volume\") pod \"collect-profiles-29412015-lfk94\" (UID: \"949db1aa-c2b1-4e04-aa37-22399cf1f103\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.220017 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949db1aa-c2b1-4e04-aa37-22399cf1f103-config-volume\") pod \"collect-profiles-29412015-lfk94\" (UID: \"949db1aa-c2b1-4e04-aa37-22399cf1f103\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.220117 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhj2x\" (UniqueName: \"kubernetes.io/projected/949db1aa-c2b1-4e04-aa37-22399cf1f103-kube-api-access-fhj2x\") pod \"collect-profiles-29412015-lfk94\" (UID: \"949db1aa-c2b1-4e04-aa37-22399cf1f103\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.321473 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhj2x\" (UniqueName: \"kubernetes.io/projected/949db1aa-c2b1-4e04-aa37-22399cf1f103-kube-api-access-fhj2x\") pod \"collect-profiles-29412015-lfk94\" (UID: \"949db1aa-c2b1-4e04-aa37-22399cf1f103\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.321586 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/949db1aa-c2b1-4e04-aa37-22399cf1f103-secret-volume\") pod \"collect-profiles-29412015-lfk94\" (UID: \"949db1aa-c2b1-4e04-aa37-22399cf1f103\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.321658 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949db1aa-c2b1-4e04-aa37-22399cf1f103-config-volume\") pod \"collect-profiles-29412015-lfk94\" (UID: \"949db1aa-c2b1-4e04-aa37-22399cf1f103\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.323292 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949db1aa-c2b1-4e04-aa37-22399cf1f103-config-volume\") pod \"collect-profiles-29412015-lfk94\" (UID: \"949db1aa-c2b1-4e04-aa37-22399cf1f103\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.331086 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/949db1aa-c2b1-4e04-aa37-22399cf1f103-secret-volume\") pod \"collect-profiles-29412015-lfk94\" (UID: \"949db1aa-c2b1-4e04-aa37-22399cf1f103\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.339568 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhj2x\" (UniqueName: \"kubernetes.io/projected/949db1aa-c2b1-4e04-aa37-22399cf1f103-kube-api-access-fhj2x\") pod \"collect-profiles-29412015-lfk94\" (UID: \"949db1aa-c2b1-4e04-aa37-22399cf1f103\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.520382 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:00 crc kubenswrapper[4811]: I1203 00:15:00.725907 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94"] Dec 03 00:15:01 crc kubenswrapper[4811]: I1203 00:15:01.647691 4811 generic.go:334] "Generic (PLEG): container finished" podID="949db1aa-c2b1-4e04-aa37-22399cf1f103" containerID="4e6ab1c065e5be3edf690b5906912da50516e148ffd3dbb29e6bd7d826fdb1c6" exitCode=0 Dec 03 00:15:01 crc kubenswrapper[4811]: I1203 00:15:01.647838 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" event={"ID":"949db1aa-c2b1-4e04-aa37-22399cf1f103","Type":"ContainerDied","Data":"4e6ab1c065e5be3edf690b5906912da50516e148ffd3dbb29e6bd7d826fdb1c6"} Dec 03 00:15:01 crc kubenswrapper[4811]: I1203 00:15:01.648178 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" event={"ID":"949db1aa-c2b1-4e04-aa37-22399cf1f103","Type":"ContainerStarted","Data":"94393e7be3565faf4f52863ba69002eafc02047c963a3d3098caf9805daab55d"} Dec 03 00:15:02 crc kubenswrapper[4811]: I1203 00:15:02.862431 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:03 crc kubenswrapper[4811]: I1203 00:15:03.052851 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhj2x\" (UniqueName: \"kubernetes.io/projected/949db1aa-c2b1-4e04-aa37-22399cf1f103-kube-api-access-fhj2x\") pod \"949db1aa-c2b1-4e04-aa37-22399cf1f103\" (UID: \"949db1aa-c2b1-4e04-aa37-22399cf1f103\") " Dec 03 00:15:03 crc kubenswrapper[4811]: I1203 00:15:03.052948 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949db1aa-c2b1-4e04-aa37-22399cf1f103-config-volume\") pod \"949db1aa-c2b1-4e04-aa37-22399cf1f103\" (UID: \"949db1aa-c2b1-4e04-aa37-22399cf1f103\") " Dec 03 00:15:03 crc kubenswrapper[4811]: I1203 00:15:03.053042 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/949db1aa-c2b1-4e04-aa37-22399cf1f103-secret-volume\") pod \"949db1aa-c2b1-4e04-aa37-22399cf1f103\" (UID: \"949db1aa-c2b1-4e04-aa37-22399cf1f103\") " Dec 03 00:15:03 crc kubenswrapper[4811]: I1203 00:15:03.054146 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/949db1aa-c2b1-4e04-aa37-22399cf1f103-config-volume" (OuterVolumeSpecName: "config-volume") pod "949db1aa-c2b1-4e04-aa37-22399cf1f103" (UID: "949db1aa-c2b1-4e04-aa37-22399cf1f103"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:15:03 crc kubenswrapper[4811]: I1203 00:15:03.058302 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949db1aa-c2b1-4e04-aa37-22399cf1f103-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "949db1aa-c2b1-4e04-aa37-22399cf1f103" (UID: "949db1aa-c2b1-4e04-aa37-22399cf1f103"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:15:03 crc kubenswrapper[4811]: I1203 00:15:03.058626 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949db1aa-c2b1-4e04-aa37-22399cf1f103-kube-api-access-fhj2x" (OuterVolumeSpecName: "kube-api-access-fhj2x") pod "949db1aa-c2b1-4e04-aa37-22399cf1f103" (UID: "949db1aa-c2b1-4e04-aa37-22399cf1f103"). InnerVolumeSpecName "kube-api-access-fhj2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:15:03 crc kubenswrapper[4811]: I1203 00:15:03.155608 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/949db1aa-c2b1-4e04-aa37-22399cf1f103-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:03 crc kubenswrapper[4811]: I1203 00:15:03.155666 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhj2x\" (UniqueName: \"kubernetes.io/projected/949db1aa-c2b1-4e04-aa37-22399cf1f103-kube-api-access-fhj2x\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:03 crc kubenswrapper[4811]: I1203 00:15:03.155680 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949db1aa-c2b1-4e04-aa37-22399cf1f103-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:03 crc kubenswrapper[4811]: I1203 00:15:03.658933 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" event={"ID":"949db1aa-c2b1-4e04-aa37-22399cf1f103","Type":"ContainerDied","Data":"94393e7be3565faf4f52863ba69002eafc02047c963a3d3098caf9805daab55d"} Dec 03 00:15:03 crc kubenswrapper[4811]: I1203 00:15:03.658989 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94393e7be3565faf4f52863ba69002eafc02047c963a3d3098caf9805daab55d" Dec 03 00:15:03 crc kubenswrapper[4811]: I1203 00:15:03.659077 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-lfk94" Dec 03 00:15:32 crc kubenswrapper[4811]: I1203 00:15:32.940009 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:15:32 crc kubenswrapper[4811]: I1203 00:15:32.940778 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.033618 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mjj8p"] Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.034640 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovn-controller" containerID="cri-o://d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85" gracePeriod=30 Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.034790 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4" gracePeriod=30 Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.034852 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="kube-rbac-proxy-node" containerID="cri-o://5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f" gracePeriod=30 Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.034948 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovn-acl-logging" containerID="cri-o://bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38" gracePeriod=30 Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.034811 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="northd" containerID="cri-o://9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae" gracePeriod=30 Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.035139 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="sbdb" containerID="cri-o://385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9" gracePeriod=30 Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.035163 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="nbdb" 
containerID="cri-o://11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272" gracePeriod=30 Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.090808 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" containerID="cri-o://7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39" gracePeriod=30 Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.337121 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/3.log" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.339027 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovn-acl-logging/0.log" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.339585 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovn-controller/0.log" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.340209 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.397659 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5zbtd"] Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.397928 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="nbdb" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.397942 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="nbdb" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.397959 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.397967 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.397977 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="kube-rbac-proxy-node" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.397985 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="kube-rbac-proxy-node" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.397993 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949db1aa-c2b1-4e04-aa37-22399cf1f103" containerName="collect-profiles" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398000 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="949db1aa-c2b1-4e04-aa37-22399cf1f103" containerName="collect-profiles" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.398012 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="kubecfg-setup" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398020 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="kubecfg-setup" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 
00:15:52.398027 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398032 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.398042 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovn-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398048 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovn-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.398055 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="northd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398074 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="northd" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.398086 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398091 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.398101 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovn-acl-logging" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398107 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovn-acl-logging" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.398117 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="sbdb" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398122 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="sbdb" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.398132 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398138 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398227 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="sbdb" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398240 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="northd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398247 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398280 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 00:15:52 crc 
kubenswrapper[4811]: I1203 00:15:52.398291 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="949db1aa-c2b1-4e04-aa37-22399cf1f103" containerName="collect-profiles" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398299 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovn-acl-logging" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398306 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398313 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398321 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovn-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398328 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="nbdb" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398334 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398341 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="kube-rbac-proxy-node" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.398462 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398469 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: E1203 00:15:52.398477 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398484 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.398599 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerName="ovnkube-controller" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.400131 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.410720 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-run-ovn-kubernetes\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.410762 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-var-lib-openvswitch\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.410802 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-cni-bin\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.410826 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-cni-netd\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.410863 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovnkube-config\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.410884 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovnkube-script-lib\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.410875 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.410906 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-node-log\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.410970 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-node-log" (OuterVolumeSpecName: "node-log") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411013 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411035 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411049 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-env-overrides\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411090 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-openvswitch\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411134 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-systemd\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411195 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-systemd-units\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411227 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-slash\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411289 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-etc-openvswitch\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411322 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-kubelet\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411362 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovn-node-metrics-cert\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411404 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms7q7\" (UniqueName: \"kubernetes.io/projected/3e8d9251-ed38-4134-b62e-f9a34bf4c755-kube-api-access-ms7q7\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411461 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-run-netns\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411487 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-ovn\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411510 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-log-socket\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411553 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\" (UID: \"3e8d9251-ed38-4134-b62e-f9a34bf4c755\") " Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411765 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-log-socket\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411828 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37723039-70ec-42a0-91ca-7f3a7513f889-ovn-node-metrics-cert\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411855 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-slash\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411892 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37723039-70ec-42a0-91ca-7f3a7513f889-env-overrides\") pod \"ovnkube-node-5zbtd\" (UID: 
\"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411931 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37723039-70ec-42a0-91ca-7f3a7513f889-ovnkube-config\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411972 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-systemd-units\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411999 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-run-openvswitch\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412051 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-run-netns\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412078 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/37723039-70ec-42a0-91ca-7f3a7513f889-ovnkube-script-lib\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412110 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-run-systemd\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412128 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-run-ovn-kubernetes\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412173 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-cni-netd\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412195 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412231 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-var-lib-openvswitch\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412281 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-etc-openvswitch\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412321 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq47l\" (UniqueName: \"kubernetes.io/projected/37723039-70ec-42a0-91ca-7f3a7513f889-kube-api-access-zq47l\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412381 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-node-log\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411053 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411842 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.411878 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412092 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412293 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412288 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412370 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-slash" (OuterVolumeSpecName: "host-slash") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412391 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412414 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-cni-bin\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412627 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-kubelet\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412654 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-run-ovn\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412815 4811 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412831 4811 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412842 4811 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-slash\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412854 4811 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412865 4811 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412879 4811 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412892 4811 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412902 4811 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412911 4811 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412923 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412934 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.412943 4811 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-node-log\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.413301 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.413331 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.413354 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-log-socket" (OuterVolumeSpecName: "log-socket") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.413377 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.413473 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.419940 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.421647 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8d9251-ed38-4134-b62e-f9a34bf4c755-kube-api-access-ms7q7" (OuterVolumeSpecName: "kube-api-access-ms7q7") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "kube-api-access-ms7q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.427725 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3e8d9251-ed38-4134-b62e-f9a34bf4c755" (UID: "3e8d9251-ed38-4134-b62e-f9a34bf4c755"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514426 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-run-systemd\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514480 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-run-ovn-kubernetes\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514504 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-cni-netd\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514521 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514530 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-run-systemd\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514579 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-var-lib-openvswitch\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514542 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-var-lib-openvswitch\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514615 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-cni-netd\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514619 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-etc-openvswitch\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514634 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-etc-openvswitch\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514657 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514674 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq47l\" (UniqueName: \"kubernetes.io/projected/37723039-70ec-42a0-91ca-7f3a7513f889-kube-api-access-zq47l\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514713 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-node-log\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514739 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-cni-bin\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514772 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-kubelet\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514794 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-run-ovn\") 
pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514863 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-log-socket\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514893 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37723039-70ec-42a0-91ca-7f3a7513f889-ovn-node-metrics-cert\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514911 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-slash\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514929 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37723039-70ec-42a0-91ca-7f3a7513f889-env-overrides\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514952 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37723039-70ec-42a0-91ca-7f3a7513f889-ovnkube-config\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514977 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-systemd-units\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514976 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-run-ovn\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514680 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-run-ovn-kubernetes\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.514992 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-run-openvswitch\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515016 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-node-log\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515024 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-run-netns\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515048 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/37723039-70ec-42a0-91ca-7f3a7513f889-ovnkube-script-lib\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515057 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-cni-bin\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515318 4811 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515339 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-kubelet\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515357 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-run-openvswitch\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515379 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-log-socket\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515397 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-systemd-units\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515467 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-slash\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515485 4811 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515502 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/37723039-70ec-42a0-91ca-7f3a7513f889-host-run-netns\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515950 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/37723039-70ec-42a0-91ca-7f3a7513f889-env-overrides\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.515961 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e8d9251-ed38-4134-b62e-f9a34bf4c755-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.516017 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms7q7\" (UniqueName: \"kubernetes.io/projected/3e8d9251-ed38-4134-b62e-f9a34bf4c755-kube-api-access-ms7q7\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.516042 4811 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.516058 4811 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.516070 4811 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-log-socket\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.516083 4811 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e8d9251-ed38-4134-b62e-f9a34bf4c755-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.516192 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/37723039-70ec-42a0-91ca-7f3a7513f889-ovnkube-config\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.516389 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/37723039-70ec-42a0-91ca-7f3a7513f889-ovnkube-script-lib\") pod \"ovnkube-node-5zbtd\" (UID: 
\"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.519648 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/37723039-70ec-42a0-91ca-7f3a7513f889-ovn-node-metrics-cert\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.530849 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq47l\" (UniqueName: \"kubernetes.io/projected/37723039-70ec-42a0-91ca-7f3a7513f889-kube-api-access-zq47l\") pod \"ovnkube-node-5zbtd\" (UID: \"37723039-70ec-42a0-91ca-7f3a7513f889\") " pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.717055 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.987671 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c998b_06cb0758-b33b-4730-a341-cc78a072aa5f/kube-multus/2.log" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.989185 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c998b_06cb0758-b33b-4730-a341-cc78a072aa5f/kube-multus/1.log" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.989282 4811 generic.go:334] "Generic (PLEG): container finished" podID="06cb0758-b33b-4730-a341-cc78a072aa5f" containerID="6639175e903ec54a486b5c8fc7f020e0d9fc4edcf8b04886d8660e81e0b890f5" exitCode=2 Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.989396 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c998b" event={"ID":"06cb0758-b33b-4730-a341-cc78a072aa5f","Type":"ContainerDied","Data":"6639175e903ec54a486b5c8fc7f020e0d9fc4edcf8b04886d8660e81e0b890f5"} Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.990470 4811 scope.go:117] "RemoveContainer" containerID="738df3ae5a86e625d062467d9b8983242ee4336ebd5182288c1de1774add1b8f" Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.992699 4811 generic.go:334] "Generic (PLEG): container finished" podID="37723039-70ec-42a0-91ca-7f3a7513f889" containerID="9f489c581a53d09cf038ca1fe3ca01c9596adb8f3757d73687c9a0e15f6814a4" exitCode=0 Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.992868 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" event={"ID":"37723039-70ec-42a0-91ca-7f3a7513f889","Type":"ContainerDied","Data":"9f489c581a53d09cf038ca1fe3ca01c9596adb8f3757d73687c9a0e15f6814a4"} Dec 03 00:15:52 crc kubenswrapper[4811]: I1203 00:15:52.992913 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" event={"ID":"37723039-70ec-42a0-91ca-7f3a7513f889","Type":"ContainerStarted","Data":"34e823b6e40020d1d10c2a0bdfd6919df1a0f315390a59e1216816ea45d77d10"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.006709 4811 scope.go:117] "RemoveContainer" containerID="6639175e903ec54a486b5c8fc7f020e0d9fc4edcf8b04886d8660e81e0b890f5" Dec 03 00:15:53 crc kubenswrapper[4811]: E1203 00:15:53.007642 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus 
pod=multus-c998b_openshift-multus(06cb0758-b33b-4730-a341-cc78a072aa5f)\"" pod="openshift-multus/multus-c998b" podUID="06cb0758-b33b-4730-a341-cc78a072aa5f" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.025911 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovnkube-controller/3.log" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.031906 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovn-acl-logging/0.log" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.032652 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mjj8p_3e8d9251-ed38-4134-b62e-f9a34bf4c755/ovn-controller/0.log" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033219 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39" exitCode=0 Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033706 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9" exitCode=0 Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033719 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272" exitCode=0 Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033727 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae" exitCode=0 Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033733 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4" exitCode=0 Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033740 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f" exitCode=0 Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033750 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38" exitCode=143 Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033784 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" containerID="d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85" exitCode=143 Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033814 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033851 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9"} Dec 03 00:15:53 
crc kubenswrapper[4811]: I1203 00:15:53.033866 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033878 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033890 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033901 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033912 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033927 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033933 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033940 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033946 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033951 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033958 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033963 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033968 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 
00:15:53.033975 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033982 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033991 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.033998 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034003 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034011 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034017 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034022 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034028 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034034 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034041 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034052 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034060 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034068 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034075 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034080 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034086 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034092 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034097 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034771 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034783 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034788 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034793 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034802 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" event={"ID":"3e8d9251-ed38-4134-b62e-f9a34bf4c755","Type":"ContainerDied","Data":"f63f7db3d655c69aa2517a8067d1e6c173166e1d6f03a9463ffc89e084553c00"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034813 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034819 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034825 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034830 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034835 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034841 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034845 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034850 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034855 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034860 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec"} Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.034968 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mjj8p" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.054111 4811 scope.go:117] "RemoveContainer" containerID="7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.092190 4811 scope.go:117] "RemoveContainer" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.094797 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mjj8p"] Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.103223 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mjj8p"] Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.112341 4811 scope.go:117] "RemoveContainer" containerID="385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.127875 4811 scope.go:117] "RemoveContainer" containerID="11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.144966 4811 scope.go:117] "RemoveContainer" containerID="9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.161760 4811 scope.go:117] "RemoveContainer" containerID="93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.188073 4811 scope.go:117] "RemoveContainer" containerID="5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.215824 4811 scope.go:117] "RemoveContainer" 
containerID="bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.230869 4811 scope.go:117] "RemoveContainer" containerID="d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.252655 4811 scope.go:117] "RemoveContainer" containerID="e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.263984 4811 scope.go:117] "RemoveContainer" containerID="7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39" Dec 03 00:15:53 crc kubenswrapper[4811]: E1203 00:15:53.264496 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39\": container with ID starting with 7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39 not found: ID does not exist" containerID="7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.264536 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39"} err="failed to get container status \"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39\": rpc error: code = NotFound desc = could not find container \"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39\": container with ID starting with 7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.264568 4811 scope.go:117] "RemoveContainer" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" Dec 03 00:15:53 crc kubenswrapper[4811]: E1203 00:15:53.264840 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\": container with ID starting with 46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96 not found: ID does not exist" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.264862 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96"} err="failed to get container status \"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\": rpc error: code = NotFound desc = could not find container \"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\": container with ID starting with 46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.264879 4811 scope.go:117] "RemoveContainer" containerID="385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9" Dec 03 00:15:53 crc kubenswrapper[4811]: E1203 00:15:53.265077 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\": container with ID starting with 385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9 not found: ID does not exist" containerID="385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9" 
Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.265101 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9"} err="failed to get container status \"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\": rpc error: code = NotFound desc = could not find container \"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\": container with ID starting with 385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.265118 4811 scope.go:117] "RemoveContainer" containerID="11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272" Dec 03 00:15:53 crc kubenswrapper[4811]: E1203 00:15:53.265363 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\": container with ID starting with 11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272 not found: ID does not exist" containerID="11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.265388 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272"} err="failed to get container status \"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\": rpc error: code = NotFound desc = could not find container \"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\": container with ID starting with 11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.265405 4811 scope.go:117] "RemoveContainer" containerID="9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae" Dec 03 00:15:53 crc kubenswrapper[4811]: E1203 00:15:53.265620 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\": container with ID starting with 9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae not found: ID does not exist" containerID="9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.265641 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae"} err="failed to get container status \"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\": rpc error: code = NotFound desc = could not find container \"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\": container with ID starting with 9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.265656 4811 scope.go:117] "RemoveContainer" containerID="93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4" Dec 03 00:15:53 crc kubenswrapper[4811]: E1203 00:15:53.266494 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\": container with ID starting with 
93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4 not found: ID does not exist" containerID="93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.266887 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4"} err="failed to get container status \"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\": rpc error: code = NotFound desc = could not find container \"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\": container with ID starting with 93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.266907 4811 scope.go:117] "RemoveContainer" containerID="5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f" Dec 03 00:15:53 crc kubenswrapper[4811]: E1203 00:15:53.267162 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\": container with ID starting with 5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f not found: ID does not exist" containerID="5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.267186 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f"} err="failed to get container status \"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\": rpc error: code = NotFound desc = could not find container \"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\": container with ID starting with 5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.267203 4811 scope.go:117] "RemoveContainer" containerID="bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38" Dec 03 00:15:53 crc kubenswrapper[4811]: E1203 00:15:53.267450 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\": container with ID starting with bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38 not found: ID does not exist" containerID="bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.267473 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38"} err="failed to get container status \"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\": rpc error: code = NotFound desc = could not find container \"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\": container with ID starting with bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.267490 4811 scope.go:117] "RemoveContainer" containerID="d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85" Dec 03 00:15:53 crc kubenswrapper[4811]: E1203 00:15:53.267786 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\": container with ID starting with d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85 not found: ID does not exist" containerID="d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.267809 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85"} err="failed to get container status \"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\": rpc error: code = NotFound desc = could not find container \"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\": container with ID starting with d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.267826 4811 scope.go:117] "RemoveContainer" containerID="e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec" Dec 03 00:15:53 crc kubenswrapper[4811]: E1203 00:15:53.268100 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\": container with ID starting with e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec not found: ID does not exist" containerID="e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.268165 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec"} err="failed to get container status \"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\": rpc error: code = NotFound desc = could not find container \"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\": container with ID starting with e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.268183 4811 scope.go:117] "RemoveContainer" containerID="7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.268577 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39"} err="failed to get container status \"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39\": rpc error: code = NotFound desc = could not find container \"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39\": container with ID starting with 7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.268598 4811 scope.go:117] "RemoveContainer" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.268907 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96"} err="failed to get container status \"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\": rpc error: code = NotFound desc = could not find container 
\"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\": container with ID starting with 46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.268927 4811 scope.go:117] "RemoveContainer" containerID="385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.269389 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9"} err="failed to get container status \"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\": rpc error: code = NotFound desc = could not find container \"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\": container with ID starting with 385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.269466 4811 scope.go:117] "RemoveContainer" containerID="11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.269785 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272"} err="failed to get container status \"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\": rpc error: code = NotFound desc = could not find container \"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\": container with ID starting with 11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.269824 4811 scope.go:117] "RemoveContainer" containerID="9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.270044 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae"} err="failed to get container status \"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\": rpc error: code = NotFound desc = could not find container \"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\": container with ID starting with 9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.270074 4811 scope.go:117] "RemoveContainer" containerID="93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.270329 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4"} err="failed to get container status \"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\": rpc error: code = NotFound desc = could not find container \"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\": container with ID starting with 93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.270348 4811 scope.go:117] "RemoveContainer" containerID="5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.270537 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f"} err="failed to get container status \"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\": rpc error: code = NotFound desc = could not find container \"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\": container with ID starting with 5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.270559 4811 scope.go:117] "RemoveContainer" containerID="bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.270790 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38"} err="failed to get container status \"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\": rpc error: code = NotFound desc = could not find container \"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\": container with ID starting with bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.270811 4811 scope.go:117] "RemoveContainer" containerID="d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.271055 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85"} err="failed to get container status \"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\": rpc error: code = NotFound desc = could not find container \"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\": container with ID starting with d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.271079 4811 scope.go:117] "RemoveContainer" containerID="e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.271315 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec"} err="failed to get container status \"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\": rpc error: code = NotFound desc = could not find container \"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\": container with ID starting with e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.271335 4811 scope.go:117] "RemoveContainer" containerID="7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.273473 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39"} err="failed to get container status \"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39\": rpc error: code = NotFound desc = could not find container \"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39\": container with ID starting with 
7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.273502 4811 scope.go:117] "RemoveContainer" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.273731 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96"} err="failed to get container status \"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\": rpc error: code = NotFound desc = could not find container \"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\": container with ID starting with 46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.273751 4811 scope.go:117] "RemoveContainer" containerID="385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.274077 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9"} err="failed to get container status \"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\": rpc error: code = NotFound desc = could not find container \"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\": container with ID starting with 385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.274141 4811 scope.go:117] "RemoveContainer" containerID="11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.274468 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272"} err="failed to get container status \"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\": rpc error: code = NotFound desc = could not find container \"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\": container with ID starting with 11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.274492 4811 scope.go:117] "RemoveContainer" containerID="9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.274728 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae"} err="failed to get container status \"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\": rpc error: code = NotFound desc = could not find container \"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\": container with ID starting with 9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.274751 4811 scope.go:117] "RemoveContainer" containerID="93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.275107 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4"} err="failed to get container status \"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\": rpc error: code = NotFound desc = could not find container \"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\": container with ID starting with 93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.275148 4811 scope.go:117] "RemoveContainer" containerID="5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.275474 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f"} err="failed to get container status \"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\": rpc error: code = NotFound desc = could not find container \"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\": container with ID starting with 5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.275496 4811 scope.go:117] "RemoveContainer" containerID="bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.275699 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38"} err="failed to get container status \"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\": rpc error: code = NotFound desc = could not find container \"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\": container with ID starting with bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.275722 4811 scope.go:117] "RemoveContainer" containerID="d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.275980 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85"} err="failed to get container status \"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\": rpc error: code = NotFound desc = could not find container \"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\": container with ID starting with d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.276001 4811 scope.go:117] "RemoveContainer" containerID="e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.276266 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec"} err="failed to get container status \"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\": rpc error: code = NotFound desc = could not find container \"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\": container with ID starting with e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec not found: ID does not exist" Dec 
03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.276284 4811 scope.go:117] "RemoveContainer" containerID="7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.276719 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39"} err="failed to get container status \"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39\": rpc error: code = NotFound desc = could not find container \"7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39\": container with ID starting with 7924b2f232ce8009878744494f5ffd7da1e3fbbd7af6e7bb78b6fc6413befe39 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.276738 4811 scope.go:117] "RemoveContainer" containerID="46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.276960 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96"} err="failed to get container status \"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\": rpc error: code = NotFound desc = could not find container \"46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96\": container with ID starting with 46bc928e6b6f2de04d3637d75927d82cec694deecbe9fc9ac952c8a0ef82fe96 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.276980 4811 scope.go:117] "RemoveContainer" containerID="385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.277168 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9"} err="failed to get container status \"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\": rpc error: code = NotFound desc = could not find container \"385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9\": container with ID starting with 385697c46998e70ec9ce79fbe3ed665c17f423739ee27ffb595fca8777bf88c9 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.277186 4811 scope.go:117] "RemoveContainer" containerID="11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.277380 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272"} err="failed to get container status \"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\": rpc error: code = NotFound desc = could not find container \"11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272\": container with ID starting with 11cab49c161a21ecf348377aff3378ad5ea584271e888772fc835bb20f5e1272 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.277394 4811 scope.go:117] "RemoveContainer" containerID="9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.277551 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae"} err="failed to get container status 
\"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\": rpc error: code = NotFound desc = could not find container \"9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae\": container with ID starting with 9e0f3a0bd9d0db6db012ba6335ff569fb337fddf8704154e41e6e8b1d64193ae not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.277564 4811 scope.go:117] "RemoveContainer" containerID="93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.277843 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4"} err="failed to get container status \"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\": rpc error: code = NotFound desc = could not find container \"93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4\": container with ID starting with 93aaa821985f572acf364faa67ac5d4959afef4fdefa238f7ca1b3a9a537d8f4 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.277898 4811 scope.go:117] "RemoveContainer" containerID="5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.278199 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f"} err="failed to get container status \"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\": rpc error: code = NotFound desc = could not find container \"5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f\": container with ID starting with 5cdf796a041444685cb3b95ffd2aa093c5d9fb73446b2d377823b917145bae8f not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.278224 4811 scope.go:117] "RemoveContainer" containerID="bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.278788 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38"} err="failed to get container status \"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\": rpc error: code = NotFound desc = could not find container \"bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38\": container with ID starting with bca79dc3b5b3ad73070ed06364801cd7add55fbbd50b54218fbc4c0480f69c38 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.278807 4811 scope.go:117] "RemoveContainer" containerID="d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.279028 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85"} err="failed to get container status \"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\": rpc error: code = NotFound desc = could not find container \"d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85\": container with ID starting with d4e1abda4b77b1c9eacb45210c301e8c9eaf4fce38addf7a4bc4304dc67ebd85 not found: ID does not exist" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.279049 4811 scope.go:117] "RemoveContainer" 
containerID="e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec" Dec 03 00:15:53 crc kubenswrapper[4811]: I1203 00:15:53.279255 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec"} err="failed to get container status \"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\": rpc error: code = NotFound desc = could not find container \"e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec\": container with ID starting with e3fe3ee4daee21247510ae8b80222ef550dec66f80f37cf2c2900ebced8bbcec not found: ID does not exist" Dec 03 00:15:54 crc kubenswrapper[4811]: I1203 00:15:54.040488 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c998b_06cb0758-b33b-4730-a341-cc78a072aa5f/kube-multus/2.log" Dec 03 00:15:54 crc kubenswrapper[4811]: I1203 00:15:54.044610 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" event={"ID":"37723039-70ec-42a0-91ca-7f3a7513f889","Type":"ContainerStarted","Data":"b78cef84f8974985a427bb6a1b9ee325befb0af79b26e82be9098e8369fb77aa"} Dec 03 00:15:54 crc kubenswrapper[4811]: I1203 00:15:54.044645 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" event={"ID":"37723039-70ec-42a0-91ca-7f3a7513f889","Type":"ContainerStarted","Data":"5873db9bb1443d2ed8bcd4af03a91c4f36913566001b700cd5297e7b8e4e3e12"} Dec 03 00:15:54 crc kubenswrapper[4811]: I1203 00:15:54.044656 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" event={"ID":"37723039-70ec-42a0-91ca-7f3a7513f889","Type":"ContainerStarted","Data":"ea1cace815584d4ce3bbdddf30d5ad2c6f2c24b6c372fa9e3ed5c4353c833314"} Dec 03 00:15:54 crc kubenswrapper[4811]: I1203 00:15:54.044666 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" event={"ID":"37723039-70ec-42a0-91ca-7f3a7513f889","Type":"ContainerStarted","Data":"8ba7719aa918f506a4a0eed29838d67f0a61e73582e61d5f5b31b67fc4f86d3d"} Dec 03 00:15:54 crc kubenswrapper[4811]: I1203 00:15:54.044674 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" event={"ID":"37723039-70ec-42a0-91ca-7f3a7513f889","Type":"ContainerStarted","Data":"755032366a82a162b891a6cd85368f2b0b16cfd7aeb75aa994ce5152265b5c62"} Dec 03 00:15:54 crc kubenswrapper[4811]: I1203 00:15:54.044684 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" event={"ID":"37723039-70ec-42a0-91ca-7f3a7513f889","Type":"ContainerStarted","Data":"9ced9aa99524f0c5cb4e84eb0095707af598e41d836da94cd3f4f867c1207a33"} Dec 03 00:15:54 crc kubenswrapper[4811]: I1203 00:15:54.122062 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e8d9251-ed38-4134-b62e-f9a34bf4c755" path="/var/lib/kubelet/pods/3e8d9251-ed38-4134-b62e-f9a34bf4c755/volumes" Dec 03 00:15:56 crc kubenswrapper[4811]: I1203 00:15:56.060489 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" event={"ID":"37723039-70ec-42a0-91ca-7f3a7513f889","Type":"ContainerStarted","Data":"fa894c1d49144aff9127a28b85a7c556434359ece705bd16bbc1ff5def5b58e1"} Dec 03 00:15:59 crc kubenswrapper[4811]: I1203 00:15:59.083534 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" 
event={"ID":"37723039-70ec-42a0-91ca-7f3a7513f889","Type":"ContainerStarted","Data":"109b59972790a78a8b528139ab1b60ddec3237458ea16c19f16dba38a0e7cdf7"} Dec 03 00:15:59 crc kubenswrapper[4811]: I1203 00:15:59.084355 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:59 crc kubenswrapper[4811]: I1203 00:15:59.084374 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:15:59 crc kubenswrapper[4811]: I1203 00:15:59.121287 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" podStartSLOduration=7.121230502 podStartE2EDuration="7.121230502s" podCreationTimestamp="2025-12-03 00:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:15:59.118750121 +0000 UTC m=+599.260579593" watchObservedRunningTime="2025-12-03 00:15:59.121230502 +0000 UTC m=+599.263059974" Dec 03 00:15:59 crc kubenswrapper[4811]: I1203 00:15:59.127762 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:16:00 crc kubenswrapper[4811]: I1203 00:16:00.096346 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:16:00 crc kubenswrapper[4811]: I1203 00:16:00.133035 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:16:02 crc kubenswrapper[4811]: I1203 00:16:02.940756 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:16:02 crc kubenswrapper[4811]: I1203 00:16:02.940834 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:16:05 crc kubenswrapper[4811]: I1203 00:16:05.115610 4811 scope.go:117] "RemoveContainer" containerID="6639175e903ec54a486b5c8fc7f020e0d9fc4edcf8b04886d8660e81e0b890f5" Dec 03 00:16:05 crc kubenswrapper[4811]: E1203 00:16:05.116245 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-c998b_openshift-multus(06cb0758-b33b-4730-a341-cc78a072aa5f)\"" pod="openshift-multus/multus-c998b" podUID="06cb0758-b33b-4730-a341-cc78a072aa5f" Dec 03 00:16:18 crc kubenswrapper[4811]: I1203 00:16:18.115385 4811 scope.go:117] "RemoveContainer" containerID="6639175e903ec54a486b5c8fc7f020e0d9fc4edcf8b04886d8660e81e0b890f5" Dec 03 00:16:19 crc kubenswrapper[4811]: I1203 00:16:19.225985 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c998b_06cb0758-b33b-4730-a341-cc78a072aa5f/kube-multus/2.log" Dec 03 00:16:19 crc kubenswrapper[4811]: I1203 00:16:19.226601 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c998b" 
event={"ID":"06cb0758-b33b-4730-a341-cc78a072aa5f","Type":"ContainerStarted","Data":"98db4209d5954dadd7750518d3de6405b9274e3738f5f538cc748a677a3f5f23"} Dec 03 00:16:22 crc kubenswrapper[4811]: I1203 00:16:22.745057 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5zbtd" Dec 03 00:16:32 crc kubenswrapper[4811]: I1203 00:16:32.940973 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:16:32 crc kubenswrapper[4811]: I1203 00:16:32.941945 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:16:32 crc kubenswrapper[4811]: I1203 00:16:32.942020 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:16:32 crc kubenswrapper[4811]: I1203 00:16:32.944149 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fbc3e78d8acc5df7781522124d991f4e42780ce8b0fd9b01a7c2846f764d716"} pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:16:32 crc kubenswrapper[4811]: I1203 00:16:32.944250 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" containerID="cri-o://7fbc3e78d8acc5df7781522124d991f4e42780ce8b0fd9b01a7c2846f764d716" gracePeriod=600 Dec 03 00:16:33 crc kubenswrapper[4811]: I1203 00:16:33.332667 4811 generic.go:334] "Generic (PLEG): container finished" podID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerID="7fbc3e78d8acc5df7781522124d991f4e42780ce8b0fd9b01a7c2846f764d716" exitCode=0 Dec 03 00:16:33 crc kubenswrapper[4811]: I1203 00:16:33.332729 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerDied","Data":"7fbc3e78d8acc5df7781522124d991f4e42780ce8b0fd9b01a7c2846f764d716"} Dec 03 00:16:33 crc kubenswrapper[4811]: I1203 00:16:33.332765 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerStarted","Data":"e6b2aa2a2ddd7fd20474659fdb1c709c86b66b5560f41a3dec0d4ef06fe80f30"} Dec 03 00:16:33 crc kubenswrapper[4811]: I1203 00:16:33.332784 4811 scope.go:117] "RemoveContainer" containerID="10561c3fa5ec63e76b89f65f6adfa64f4786ff83527fb29ebb98d13b1546c538" Dec 03 00:17:04 crc kubenswrapper[4811]: I1203 00:17:04.704151 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kj29q"] Dec 03 00:17:04 crc kubenswrapper[4811]: I1203 00:17:04.705298 4811 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-kj29q" podUID="b71b215c-c0c4-49e6-aa06-a4025a1fd22d" containerName="registry-server" containerID="cri-o://360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4" gracePeriod=30 Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.054751 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.119534 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-catalog-content\") pod \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\" (UID: \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\") " Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.119607 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ggqk\" (UniqueName: \"kubernetes.io/projected/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-kube-api-access-5ggqk\") pod \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\" (UID: \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\") " Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.119657 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-utilities\") pod \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\" (UID: \"b71b215c-c0c4-49e6-aa06-a4025a1fd22d\") " Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.121113 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-utilities" (OuterVolumeSpecName: "utilities") pod "b71b215c-c0c4-49e6-aa06-a4025a1fd22d" (UID: "b71b215c-c0c4-49e6-aa06-a4025a1fd22d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.129765 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-kube-api-access-5ggqk" (OuterVolumeSpecName: "kube-api-access-5ggqk") pod "b71b215c-c0c4-49e6-aa06-a4025a1fd22d" (UID: "b71b215c-c0c4-49e6-aa06-a4025a1fd22d"). InnerVolumeSpecName "kube-api-access-5ggqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.143480 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b71b215c-c0c4-49e6-aa06-a4025a1fd22d" (UID: "b71b215c-c0c4-49e6-aa06-a4025a1fd22d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.221411 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.221456 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ggqk\" (UniqueName: \"kubernetes.io/projected/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-kube-api-access-5ggqk\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.221467 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b71b215c-c0c4-49e6-aa06-a4025a1fd22d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.556479 4811 generic.go:334] "Generic (PLEG): container finished" podID="b71b215c-c0c4-49e6-aa06-a4025a1fd22d" containerID="360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4" exitCode=0 Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.556538 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kj29q" event={"ID":"b71b215c-c0c4-49e6-aa06-a4025a1fd22d","Type":"ContainerDied","Data":"360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4"} Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.556580 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kj29q" event={"ID":"b71b215c-c0c4-49e6-aa06-a4025a1fd22d","Type":"ContainerDied","Data":"f7c31e1cbea71ccc88b75d47d306f3d60759080a7462ff907d75de84e2312cf7"} Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.556623 4811 scope.go:117] "RemoveContainer" containerID="360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.556683 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kj29q" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.577827 4811 scope.go:117] "RemoveContainer" containerID="5bad260441bffc201d9b0cdcea075530bcfe4318f39788776dc24a048f7fdd08" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.597323 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kj29q"] Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.601369 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kj29q"] Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.609853 4811 scope.go:117] "RemoveContainer" containerID="a6e8652dbed505a632d4b7be6971d2a64f7fe04a5854a9ba47a8a7f9591c210f" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.630237 4811 scope.go:117] "RemoveContainer" containerID="360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4" Dec 03 00:17:05 crc kubenswrapper[4811]: E1203 00:17:05.632604 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4\": container with ID starting with 360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4 not found: ID does not exist" containerID="360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.632668 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4"} err="failed to get container status \"360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4\": rpc error: code = NotFound desc = could not find container \"360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4\": container with ID starting with 360ec8a29af5bf468375f879adf24b92d19bb9f953f90ae563eb950f8a89a5b4 not found: ID does not exist" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.632710 4811 scope.go:117] "RemoveContainer" containerID="5bad260441bffc201d9b0cdcea075530bcfe4318f39788776dc24a048f7fdd08" Dec 03 00:17:05 crc kubenswrapper[4811]: E1203 00:17:05.634068 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bad260441bffc201d9b0cdcea075530bcfe4318f39788776dc24a048f7fdd08\": container with ID starting with 5bad260441bffc201d9b0cdcea075530bcfe4318f39788776dc24a048f7fdd08 not found: ID does not exist" containerID="5bad260441bffc201d9b0cdcea075530bcfe4318f39788776dc24a048f7fdd08" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.634128 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bad260441bffc201d9b0cdcea075530bcfe4318f39788776dc24a048f7fdd08"} err="failed to get container status \"5bad260441bffc201d9b0cdcea075530bcfe4318f39788776dc24a048f7fdd08\": rpc error: code = NotFound desc = could not find container \"5bad260441bffc201d9b0cdcea075530bcfe4318f39788776dc24a048f7fdd08\": container with ID starting with 5bad260441bffc201d9b0cdcea075530bcfe4318f39788776dc24a048f7fdd08 not found: ID does not exist" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.634166 4811 scope.go:117] "RemoveContainer" containerID="a6e8652dbed505a632d4b7be6971d2a64f7fe04a5854a9ba47a8a7f9591c210f" Dec 03 00:17:05 crc kubenswrapper[4811]: E1203 00:17:05.634945 4811 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a6e8652dbed505a632d4b7be6971d2a64f7fe04a5854a9ba47a8a7f9591c210f\": container with ID starting with a6e8652dbed505a632d4b7be6971d2a64f7fe04a5854a9ba47a8a7f9591c210f not found: ID does not exist" containerID="a6e8652dbed505a632d4b7be6971d2a64f7fe04a5854a9ba47a8a7f9591c210f" Dec 03 00:17:05 crc kubenswrapper[4811]: I1203 00:17:05.635019 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e8652dbed505a632d4b7be6971d2a64f7fe04a5854a9ba47a8a7f9591c210f"} err="failed to get container status \"a6e8652dbed505a632d4b7be6971d2a64f7fe04a5854a9ba47a8a7f9591c210f\": rpc error: code = NotFound desc = could not find container \"a6e8652dbed505a632d4b7be6971d2a64f7fe04a5854a9ba47a8a7f9591c210f\": container with ID starting with a6e8652dbed505a632d4b7be6971d2a64f7fe04a5854a9ba47a8a7f9591c210f not found: ID does not exist" Dec 03 00:17:06 crc kubenswrapper[4811]: I1203 00:17:06.123308 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b71b215c-c0c4-49e6-aa06-a4025a1fd22d" path="/var/lib/kubelet/pods/b71b215c-c0c4-49e6-aa06-a4025a1fd22d/volumes" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.713179 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p"] Dec 03 00:17:08 crc kubenswrapper[4811]: E1203 00:17:08.713902 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71b215c-c0c4-49e6-aa06-a4025a1fd22d" containerName="registry-server" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.713919 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71b215c-c0c4-49e6-aa06-a4025a1fd22d" containerName="registry-server" Dec 03 00:17:08 crc kubenswrapper[4811]: E1203 00:17:08.713936 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71b215c-c0c4-49e6-aa06-a4025a1fd22d" containerName="extract-utilities" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.713944 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71b215c-c0c4-49e6-aa06-a4025a1fd22d" containerName="extract-utilities" Dec 03 00:17:08 crc kubenswrapper[4811]: E1203 00:17:08.713958 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71b215c-c0c4-49e6-aa06-a4025a1fd22d" containerName="extract-content" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.713967 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71b215c-c0c4-49e6-aa06-a4025a1fd22d" containerName="extract-content" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.714074 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71b215c-c0c4-49e6-aa06-a4025a1fd22d" containerName="registry-server" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.715077 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.719192 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.731731 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p"] Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.773571 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096bc1af-99b4-4653-8f27-7f030927b726-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p\" (UID: \"096bc1af-99b4-4653-8f27-7f030927b726\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.773646 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2q4j\" (UniqueName: \"kubernetes.io/projected/096bc1af-99b4-4653-8f27-7f030927b726-kube-api-access-t2q4j\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p\" (UID: \"096bc1af-99b4-4653-8f27-7f030927b726\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.773699 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096bc1af-99b4-4653-8f27-7f030927b726-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p\" (UID: \"096bc1af-99b4-4653-8f27-7f030927b726\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.875318 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096bc1af-99b4-4653-8f27-7f030927b726-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p\" (UID: \"096bc1af-99b4-4653-8f27-7f030927b726\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.875438 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2q4j\" (UniqueName: \"kubernetes.io/projected/096bc1af-99b4-4653-8f27-7f030927b726-kube-api-access-t2q4j\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p\" (UID: \"096bc1af-99b4-4653-8f27-7f030927b726\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.875499 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096bc1af-99b4-4653-8f27-7f030927b726-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p\" (UID: \"096bc1af-99b4-4653-8f27-7f030927b726\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.875947 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/096bc1af-99b4-4653-8f27-7f030927b726-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p\" (UID: \"096bc1af-99b4-4653-8f27-7f030927b726\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.876417 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096bc1af-99b4-4653-8f27-7f030927b726-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p\" (UID: \"096bc1af-99b4-4653-8f27-7f030927b726\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:08 crc kubenswrapper[4811]: I1203 00:17:08.899455 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2q4j\" (UniqueName: \"kubernetes.io/projected/096bc1af-99b4-4653-8f27-7f030927b726-kube-api-access-t2q4j\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p\" (UID: \"096bc1af-99b4-4653-8f27-7f030927b726\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:09 crc kubenswrapper[4811]: I1203 00:17:09.036953 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:09 crc kubenswrapper[4811]: I1203 00:17:09.270864 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p"] Dec 03 00:17:09 crc kubenswrapper[4811]: I1203 00:17:09.582686 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" event={"ID":"096bc1af-99b4-4653-8f27-7f030927b726","Type":"ContainerStarted","Data":"c84ea477c55454d7bb914448ddc9eeead96a5ed7c435ee43a5abcd1607810d50"} Dec 03 00:17:09 crc kubenswrapper[4811]: I1203 00:17:09.582791 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" event={"ID":"096bc1af-99b4-4653-8f27-7f030927b726","Type":"ContainerStarted","Data":"d951f16332266f3aad6b296c5bfad148658090fc3f05d9326eda372cea1411de"} Dec 03 00:17:10 crc kubenswrapper[4811]: I1203 00:17:10.591089 4811 generic.go:334] "Generic (PLEG): container finished" podID="096bc1af-99b4-4653-8f27-7f030927b726" containerID="c84ea477c55454d7bb914448ddc9eeead96a5ed7c435ee43a5abcd1607810d50" exitCode=0 Dec 03 00:17:10 crc kubenswrapper[4811]: I1203 00:17:10.591145 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" event={"ID":"096bc1af-99b4-4653-8f27-7f030927b726","Type":"ContainerDied","Data":"c84ea477c55454d7bb914448ddc9eeead96a5ed7c435ee43a5abcd1607810d50"} Dec 03 00:17:10 crc kubenswrapper[4811]: I1203 00:17:10.593960 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 00:17:12 crc kubenswrapper[4811]: I1203 00:17:12.603538 4811 generic.go:334] "Generic (PLEG): container finished" podID="096bc1af-99b4-4653-8f27-7f030927b726" containerID="510fcac7d29f6f633d9addc9ab57da24f640677300013409b4d39013a87c64c6" exitCode=0 Dec 03 00:17:12 crc kubenswrapper[4811]: I1203 00:17:12.603995 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" event={"ID":"096bc1af-99b4-4653-8f27-7f030927b726","Type":"ContainerDied","Data":"510fcac7d29f6f633d9addc9ab57da24f640677300013409b4d39013a87c64c6"} Dec 03 00:17:13 crc kubenswrapper[4811]: I1203 00:17:13.612441 4811 generic.go:334] "Generic (PLEG): container finished" podID="096bc1af-99b4-4653-8f27-7f030927b726" containerID="ba0ecf5eb7100c110ea3c1a1f8de78f1a09161b5340b0b8474b399e1b79ee2cd" exitCode=0 Dec 03 00:17:13 crc kubenswrapper[4811]: I1203 00:17:13.612488 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" event={"ID":"096bc1af-99b4-4653-8f27-7f030927b726","Type":"ContainerDied","Data":"ba0ecf5eb7100c110ea3c1a1f8de78f1a09161b5340b0b8474b399e1b79ee2cd"} Dec 03 00:17:14 crc kubenswrapper[4811]: I1203 00:17:14.848579 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:14 crc kubenswrapper[4811]: I1203 00:17:14.972843 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096bc1af-99b4-4653-8f27-7f030927b726-util\") pod \"096bc1af-99b4-4653-8f27-7f030927b726\" (UID: \"096bc1af-99b4-4653-8f27-7f030927b726\") " Dec 03 00:17:14 crc kubenswrapper[4811]: I1203 00:17:14.973049 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2q4j\" (UniqueName: \"kubernetes.io/projected/096bc1af-99b4-4653-8f27-7f030927b726-kube-api-access-t2q4j\") pod \"096bc1af-99b4-4653-8f27-7f030927b726\" (UID: \"096bc1af-99b4-4653-8f27-7f030927b726\") " Dec 03 00:17:14 crc kubenswrapper[4811]: I1203 00:17:14.973200 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096bc1af-99b4-4653-8f27-7f030927b726-bundle\") pod \"096bc1af-99b4-4653-8f27-7f030927b726\" (UID: \"096bc1af-99b4-4653-8f27-7f030927b726\") " Dec 03 00:17:14 crc kubenswrapper[4811]: I1203 00:17:14.975674 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096bc1af-99b4-4653-8f27-7f030927b726-bundle" (OuterVolumeSpecName: "bundle") pod "096bc1af-99b4-4653-8f27-7f030927b726" (UID: "096bc1af-99b4-4653-8f27-7f030927b726"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:17:14 crc kubenswrapper[4811]: I1203 00:17:14.981981 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096bc1af-99b4-4653-8f27-7f030927b726-kube-api-access-t2q4j" (OuterVolumeSpecName: "kube-api-access-t2q4j") pod "096bc1af-99b4-4653-8f27-7f030927b726" (UID: "096bc1af-99b4-4653-8f27-7f030927b726"). InnerVolumeSpecName "kube-api-access-t2q4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:17:14 crc kubenswrapper[4811]: I1203 00:17:14.991252 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096bc1af-99b4-4653-8f27-7f030927b726-util" (OuterVolumeSpecName: "util") pod "096bc1af-99b4-4653-8f27-7f030927b726" (UID: "096bc1af-99b4-4653-8f27-7f030927b726"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:17:15 crc kubenswrapper[4811]: I1203 00:17:15.075053 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2q4j\" (UniqueName: \"kubernetes.io/projected/096bc1af-99b4-4653-8f27-7f030927b726-kube-api-access-t2q4j\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:15 crc kubenswrapper[4811]: I1203 00:17:15.075120 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096bc1af-99b4-4653-8f27-7f030927b726-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:15 crc kubenswrapper[4811]: I1203 00:17:15.075135 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096bc1af-99b4-4653-8f27-7f030927b726-util\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:15 crc kubenswrapper[4811]: I1203 00:17:15.628949 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" event={"ID":"096bc1af-99b4-4653-8f27-7f030927b726","Type":"ContainerDied","Data":"d951f16332266f3aad6b296c5bfad148658090fc3f05d9326eda372cea1411de"} Dec 03 00:17:15 crc kubenswrapper[4811]: I1203 00:17:15.629023 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d951f16332266f3aad6b296c5bfad148658090fc3f05d9326eda372cea1411de" Dec 03 00:17:15 crc kubenswrapper[4811]: I1203 00:17:15.628989 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p" Dec 03 00:17:16 crc kubenswrapper[4811]: I1203 00:17:16.897414 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj"] Dec 03 00:17:16 crc kubenswrapper[4811]: E1203 00:17:16.897660 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096bc1af-99b4-4653-8f27-7f030927b726" containerName="extract" Dec 03 00:17:16 crc kubenswrapper[4811]: I1203 00:17:16.897676 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="096bc1af-99b4-4653-8f27-7f030927b726" containerName="extract" Dec 03 00:17:16 crc kubenswrapper[4811]: E1203 00:17:16.897690 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096bc1af-99b4-4653-8f27-7f030927b726" containerName="pull" Dec 03 00:17:16 crc kubenswrapper[4811]: I1203 00:17:16.897696 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="096bc1af-99b4-4653-8f27-7f030927b726" containerName="pull" Dec 03 00:17:16 crc kubenswrapper[4811]: E1203 00:17:16.897710 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096bc1af-99b4-4653-8f27-7f030927b726" containerName="util" Dec 03 00:17:16 crc kubenswrapper[4811]: I1203 00:17:16.897717 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="096bc1af-99b4-4653-8f27-7f030927b726" containerName="util" Dec 03 00:17:16 crc kubenswrapper[4811]: I1203 00:17:16.897822 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="096bc1af-99b4-4653-8f27-7f030927b726" containerName="extract" Dec 03 00:17:16 crc kubenswrapper[4811]: I1203 00:17:16.898640 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:16 crc kubenswrapper[4811]: I1203 00:17:16.901139 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 00:17:16 crc kubenswrapper[4811]: I1203 00:17:16.912504 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj"] Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.004595 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj\" (UID: \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.004703 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj\" (UID: \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.004762 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnjkr\" (UniqueName: \"kubernetes.io/projected/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-kube-api-access-tnjkr\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj\" (UID: \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.106199 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnjkr\" (UniqueName: \"kubernetes.io/projected/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-kube-api-access-tnjkr\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj\" (UID: \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.106598 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj\" (UID: \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.106750 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj\" (UID: \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.107333 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj\" (UID: \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.107662 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj\" (UID: \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.128854 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnjkr\" (UniqueName: \"kubernetes.io/projected/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-kube-api-access-tnjkr\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj\" (UID: \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.221629 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.446488 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj"] Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.648425 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" event={"ID":"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0","Type":"ContainerStarted","Data":"f824b8af9e367e5f9742a60fb620232a77881fc2cb52cc86f0671867218f9f62"} Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.648931 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" event={"ID":"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0","Type":"ContainerStarted","Data":"96c5540a47f369a4fcb73945246c4759934b3b86084fd227a1e5a9a728b6f803"} Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.706172 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g"] Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.708811 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.718767 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g"] Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.816086 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40626377-4781-4a9f-b83b-ec64b75bb4e9-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g\" (UID: \"40626377-4781-4a9f-b83b-ec64b75bb4e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.816162 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40626377-4781-4a9f-b83b-ec64b75bb4e9-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g\" (UID: \"40626377-4781-4a9f-b83b-ec64b75bb4e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.816192 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f2cv\" (UniqueName: \"kubernetes.io/projected/40626377-4781-4a9f-b83b-ec64b75bb4e9-kube-api-access-7f2cv\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g\" (UID: \"40626377-4781-4a9f-b83b-ec64b75bb4e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.917953 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40626377-4781-4a9f-b83b-ec64b75bb4e9-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g\" (UID: \"40626377-4781-4a9f-b83b-ec64b75bb4e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.918041 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40626377-4781-4a9f-b83b-ec64b75bb4e9-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g\" (UID: \"40626377-4781-4a9f-b83b-ec64b75bb4e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.918090 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f2cv\" (UniqueName: \"kubernetes.io/projected/40626377-4781-4a9f-b83b-ec64b75bb4e9-kube-api-access-7f2cv\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g\" (UID: \"40626377-4781-4a9f-b83b-ec64b75bb4e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.919139 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40626377-4781-4a9f-b83b-ec64b75bb4e9-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g\" (UID: \"40626377-4781-4a9f-b83b-ec64b75bb4e9\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.919630 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40626377-4781-4a9f-b83b-ec64b75bb4e9-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g\" (UID: \"40626377-4781-4a9f-b83b-ec64b75bb4e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:17 crc kubenswrapper[4811]: I1203 00:17:17.939607 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f2cv\" (UniqueName: \"kubernetes.io/projected/40626377-4781-4a9f-b83b-ec64b75bb4e9-kube-api-access-7f2cv\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g\" (UID: \"40626377-4781-4a9f-b83b-ec64b75bb4e9\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:18 crc kubenswrapper[4811]: I1203 00:17:18.053632 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:18 crc kubenswrapper[4811]: I1203 00:17:18.302694 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g"] Dec 03 00:17:18 crc kubenswrapper[4811]: I1203 00:17:18.657115 4811 generic.go:334] "Generic (PLEG): container finished" podID="40626377-4781-4a9f-b83b-ec64b75bb4e9" containerID="370912ebfaf377d24463e2fb8e4c74e0fe8ef4fe8bb1c7c3cf581c77dc10f8fa" exitCode=0 Dec 03 00:17:18 crc kubenswrapper[4811]: I1203 00:17:18.657189 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" event={"ID":"40626377-4781-4a9f-b83b-ec64b75bb4e9","Type":"ContainerDied","Data":"370912ebfaf377d24463e2fb8e4c74e0fe8ef4fe8bb1c7c3cf581c77dc10f8fa"} Dec 03 00:17:18 crc kubenswrapper[4811]: I1203 00:17:18.657223 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" event={"ID":"40626377-4781-4a9f-b83b-ec64b75bb4e9","Type":"ContainerStarted","Data":"693392db9901ebde3fb71620d0eee43bd0a18f769986907b4107caee73db4f03"} Dec 03 00:17:18 crc kubenswrapper[4811]: I1203 00:17:18.660476 4811 generic.go:334] "Generic (PLEG): container finished" podID="85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" containerID="f824b8af9e367e5f9742a60fb620232a77881fc2cb52cc86f0671867218f9f62" exitCode=0 Dec 03 00:17:18 crc kubenswrapper[4811]: I1203 00:17:18.660501 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" event={"ID":"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0","Type":"ContainerDied","Data":"f824b8af9e367e5f9742a60fb620232a77881fc2cb52cc86f0671867218f9f62"} Dec 03 00:17:19 crc kubenswrapper[4811]: I1203 00:17:19.668482 4811 generic.go:334] "Generic (PLEG): container finished" podID="85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" containerID="6676798295de89d0f975d912ac92a790cdabdddfb41c314d1d045aa732f2e96b" exitCode=0 Dec 03 00:17:19 crc kubenswrapper[4811]: I1203 00:17:19.668611 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" 
event={"ID":"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0","Type":"ContainerDied","Data":"6676798295de89d0f975d912ac92a790cdabdddfb41c314d1d045aa732f2e96b"} Dec 03 00:17:20 crc kubenswrapper[4811]: I1203 00:17:20.676603 4811 generic.go:334] "Generic (PLEG): container finished" podID="85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" containerID="25e4256c0abdfbdc5fa42c256958b5d2e97d5883ad6392c01da659edf09655da" exitCode=0 Dec 03 00:17:20 crc kubenswrapper[4811]: I1203 00:17:20.676672 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" event={"ID":"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0","Type":"ContainerDied","Data":"25e4256c0abdfbdc5fa42c256958b5d2e97d5883ad6392c01da659edf09655da"} Dec 03 00:17:20 crc kubenswrapper[4811]: I1203 00:17:20.679470 4811 generic.go:334] "Generic (PLEG): container finished" podID="40626377-4781-4a9f-b83b-ec64b75bb4e9" containerID="849d7d177df35d49fcb0fbafe3cd3e32986ad0f4512c3c340bf8eb702cf11054" exitCode=0 Dec 03 00:17:20 crc kubenswrapper[4811]: I1203 00:17:20.679533 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" event={"ID":"40626377-4781-4a9f-b83b-ec64b75bb4e9","Type":"ContainerDied","Data":"849d7d177df35d49fcb0fbafe3cd3e32986ad0f4512c3c340bf8eb702cf11054"} Dec 03 00:17:21 crc kubenswrapper[4811]: I1203 00:17:21.689439 4811 generic.go:334] "Generic (PLEG): container finished" podID="40626377-4781-4a9f-b83b-ec64b75bb4e9" containerID="b555e8d93fc969a4a1ae54ba9212e41e8e86abdc16ae59ed046d4556716bcf79" exitCode=0 Dec 03 00:17:21 crc kubenswrapper[4811]: I1203 00:17:21.690423 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" event={"ID":"40626377-4781-4a9f-b83b-ec64b75bb4e9","Type":"ContainerDied","Data":"b555e8d93fc969a4a1ae54ba9212e41e8e86abdc16ae59ed046d4556716bcf79"} Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.032585 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.086244 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnjkr\" (UniqueName: \"kubernetes.io/projected/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-kube-api-access-tnjkr\") pod \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\" (UID: \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\") " Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.086326 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-util\") pod \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\" (UID: \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\") " Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.086449 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-bundle\") pod \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\" (UID: \"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0\") " Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.087836 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-bundle" (OuterVolumeSpecName: "bundle") pod "85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" (UID: "85f78e40-d54c-4871-93c8-8ec0c9bfa5a0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.107464 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-kube-api-access-tnjkr" (OuterVolumeSpecName: "kube-api-access-tnjkr") pod "85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" (UID: "85f78e40-d54c-4871-93c8-8ec0c9bfa5a0"). InnerVolumeSpecName "kube-api-access-tnjkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.119773 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-util" (OuterVolumeSpecName: "util") pod "85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" (UID: "85f78e40-d54c-4871-93c8-8ec0c9bfa5a0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.187611 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnjkr\" (UniqueName: \"kubernetes.io/projected/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-kube-api-access-tnjkr\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.187646 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-util\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.187657 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85f78e40-d54c-4871-93c8-8ec0c9bfa5a0-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.589983 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t"] Dec 03 00:17:22 crc kubenswrapper[4811]: E1203 00:17:22.590601 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" containerName="extract" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.590618 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" containerName="extract" Dec 03 00:17:22 crc kubenswrapper[4811]: E1203 00:17:22.590638 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" containerName="util" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.590643 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" containerName="util" Dec 03 00:17:22 crc kubenswrapper[4811]: E1203 00:17:22.590651 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" containerName="pull" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.590657 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" containerName="pull" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.590771 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f78e40-d54c-4871-93c8-8ec0c9bfa5a0" containerName="extract" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.591585 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.606673 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t"] Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.695044 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4w7b\" (UniqueName: \"kubernetes.io/projected/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-kube-api-access-l4w7b\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t\" (UID: \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.695203 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t\" (UID: \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.695247 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t\" (UID: \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.698547 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.699274 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj" event={"ID":"85f78e40-d54c-4871-93c8-8ec0c9bfa5a0","Type":"ContainerDied","Data":"96c5540a47f369a4fcb73945246c4759934b3b86084fd227a1e5a9a728b6f803"} Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.699310 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96c5540a47f369a4fcb73945246c4759934b3b86084fd227a1e5a9a728b6f803" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.796145 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4w7b\" (UniqueName: \"kubernetes.io/projected/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-kube-api-access-l4w7b\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t\" (UID: \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.796289 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t\" (UID: \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.796364 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t\" (UID: \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.796880 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t\" (UID: \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.797005 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t\" (UID: \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.848561 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4w7b\" (UniqueName: \"kubernetes.io/projected/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-kube-api-access-l4w7b\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t\" (UID: \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:22 crc kubenswrapper[4811]: I1203 00:17:22.906526 4811 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.341459 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.405966 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40626377-4781-4a9f-b83b-ec64b75bb4e9-util\") pod \"40626377-4781-4a9f-b83b-ec64b75bb4e9\" (UID: \"40626377-4781-4a9f-b83b-ec64b75bb4e9\") " Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.406125 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40626377-4781-4a9f-b83b-ec64b75bb4e9-bundle\") pod \"40626377-4781-4a9f-b83b-ec64b75bb4e9\" (UID: \"40626377-4781-4a9f-b83b-ec64b75bb4e9\") " Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.406174 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f2cv\" (UniqueName: \"kubernetes.io/projected/40626377-4781-4a9f-b83b-ec64b75bb4e9-kube-api-access-7f2cv\") pod \"40626377-4781-4a9f-b83b-ec64b75bb4e9\" (UID: \"40626377-4781-4a9f-b83b-ec64b75bb4e9\") " Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.407241 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40626377-4781-4a9f-b83b-ec64b75bb4e9-bundle" (OuterVolumeSpecName: "bundle") pod "40626377-4781-4a9f-b83b-ec64b75bb4e9" (UID: "40626377-4781-4a9f-b83b-ec64b75bb4e9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.423461 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40626377-4781-4a9f-b83b-ec64b75bb4e9-util" (OuterVolumeSpecName: "util") pod "40626377-4781-4a9f-b83b-ec64b75bb4e9" (UID: "40626377-4781-4a9f-b83b-ec64b75bb4e9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.425242 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40626377-4781-4a9f-b83b-ec64b75bb4e9-kube-api-access-7f2cv" (OuterVolumeSpecName: "kube-api-access-7f2cv") pod "40626377-4781-4a9f-b83b-ec64b75bb4e9" (UID: "40626377-4781-4a9f-b83b-ec64b75bb4e9"). InnerVolumeSpecName "kube-api-access-7f2cv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.441240 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t"] Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.507762 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40626377-4781-4a9f-b83b-ec64b75bb4e9-util\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.507804 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40626377-4781-4a9f-b83b-ec64b75bb4e9-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.507818 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f2cv\" (UniqueName: \"kubernetes.io/projected/40626377-4781-4a9f-b83b-ec64b75bb4e9-kube-api-access-7f2cv\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.706136 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" event={"ID":"bb2dfcce-0915-4f43-9a77-ad32fb713d1c","Type":"ContainerStarted","Data":"d2ff76513e9a4c9b87d88080caeae5a999aafeb12b8008cc3f8054e94da2faa8"} Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.707971 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" event={"ID":"bb2dfcce-0915-4f43-9a77-ad32fb713d1c","Type":"ContainerStarted","Data":"c57447318041c282d87f1f1d4f7ab28408059fca22167c084879796a77e83806"} Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.708301 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" event={"ID":"40626377-4781-4a9f-b83b-ec64b75bb4e9","Type":"ContainerDied","Data":"693392db9901ebde3fb71620d0eee43bd0a18f769986907b4107caee73db4f03"} Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.708332 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="693392db9901ebde3fb71620d0eee43bd0a18f769986907b4107caee73db4f03" Dec 03 00:17:23 crc kubenswrapper[4811]: I1203 00:17:23.708483 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g" Dec 03 00:17:24 crc kubenswrapper[4811]: I1203 00:17:24.716824 4811 generic.go:334] "Generic (PLEG): container finished" podID="bb2dfcce-0915-4f43-9a77-ad32fb713d1c" containerID="d2ff76513e9a4c9b87d88080caeae5a999aafeb12b8008cc3f8054e94da2faa8" exitCode=0 Dec 03 00:17:24 crc kubenswrapper[4811]: I1203 00:17:24.716887 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" event={"ID":"bb2dfcce-0915-4f43-9a77-ad32fb713d1c","Type":"ContainerDied","Data":"d2ff76513e9a4c9b87d88080caeae5a999aafeb12b8008cc3f8054e94da2faa8"} Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.155931 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz"] Dec 03 00:17:28 crc kubenswrapper[4811]: E1203 00:17:28.156493 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40626377-4781-4a9f-b83b-ec64b75bb4e9" containerName="extract" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.156509 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="40626377-4781-4a9f-b83b-ec64b75bb4e9" containerName="extract" Dec 03 00:17:28 crc kubenswrapper[4811]: E1203 00:17:28.156524 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40626377-4781-4a9f-b83b-ec64b75bb4e9" containerName="util" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.156533 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="40626377-4781-4a9f-b83b-ec64b75bb4e9" containerName="util" Dec 03 00:17:28 crc kubenswrapper[4811]: E1203 00:17:28.156551 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40626377-4781-4a9f-b83b-ec64b75bb4e9" containerName="pull" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.156561 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="40626377-4781-4a9f-b83b-ec64b75bb4e9" containerName="pull" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.156687 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="40626377-4781-4a9f-b83b-ec64b75bb4e9" containerName="extract" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.157078 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz"] Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.157173 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.165321 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.165567 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-mfmp4" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.166206 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.233008 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74"] Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.234362 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.238074 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-pmqhn" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.238273 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.258762 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5"] Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.259773 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.277618 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74"] Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.285510 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5"] Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.290893 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfvg7\" (UniqueName: \"kubernetes.io/projected/317f3f8c-58bf-420e-952e-8888d2b3fcf3-kube-api-access-rfvg7\") pod \"obo-prometheus-operator-668cf9dfbb-9t8rz\" (UID: \"317f3f8c-58bf-420e-952e-8888d2b3fcf3\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.291117 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/090985bf-75aa-4307-a8b1-2c58e2746bf7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685b74c997-p7x74\" (UID: \"090985bf-75aa-4307-a8b1-2c58e2746bf7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.291223 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/090985bf-75aa-4307-a8b1-2c58e2746bf7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685b74c997-p7x74\" (UID: \"090985bf-75aa-4307-a8b1-2c58e2746bf7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.399889 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/090985bf-75aa-4307-a8b1-2c58e2746bf7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685b74c997-p7x74\" (UID: \"090985bf-75aa-4307-a8b1-2c58e2746bf7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.399963 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/090985bf-75aa-4307-a8b1-2c58e2746bf7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685b74c997-p7x74\" (UID: 
\"090985bf-75aa-4307-a8b1-2c58e2746bf7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.399989 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec1a629d-26a7-4438-9134-d4e094ea8e99-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685b74c997-hlkk5\" (UID: \"ec1a629d-26a7-4438-9134-d4e094ea8e99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.400047 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfvg7\" (UniqueName: \"kubernetes.io/projected/317f3f8c-58bf-420e-952e-8888d2b3fcf3-kube-api-access-rfvg7\") pod \"obo-prometheus-operator-668cf9dfbb-9t8rz\" (UID: \"317f3f8c-58bf-420e-952e-8888d2b3fcf3\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.400073 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec1a629d-26a7-4438-9134-d4e094ea8e99-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685b74c997-hlkk5\" (UID: \"ec1a629d-26a7-4438-9134-d4e094ea8e99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.410108 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/090985bf-75aa-4307-a8b1-2c58e2746bf7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685b74c997-p7x74\" (UID: \"090985bf-75aa-4307-a8b1-2c58e2746bf7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.410896 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/090985bf-75aa-4307-a8b1-2c58e2746bf7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685b74c997-p7x74\" (UID: \"090985bf-75aa-4307-a8b1-2c58e2746bf7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.412485 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-zl9h4"] Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.413293 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.416788 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-6q7q2" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.417052 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.431805 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfvg7\" (UniqueName: \"kubernetes.io/projected/317f3f8c-58bf-420e-952e-8888d2b3fcf3-kube-api-access-rfvg7\") pod \"obo-prometheus-operator-668cf9dfbb-9t8rz\" (UID: \"317f3f8c-58bf-420e-952e-8888d2b3fcf3\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.436962 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-zl9h4"] Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.490512 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.502574 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec1a629d-26a7-4438-9134-d4e094ea8e99-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685b74c997-hlkk5\" (UID: \"ec1a629d-26a7-4438-9134-d4e094ea8e99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.502663 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3292536f-4a34-490a-a0e1-15241a0637a8-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-zl9h4\" (UID: \"3292536f-4a34-490a-a0e1-15241a0637a8\") " pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.502731 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec1a629d-26a7-4438-9134-d4e094ea8e99-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685b74c997-hlkk5\" (UID: \"ec1a629d-26a7-4438-9134-d4e094ea8e99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.502795 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6vrw\" (UniqueName: \"kubernetes.io/projected/3292536f-4a34-490a-a0e1-15241a0637a8-kube-api-access-z6vrw\") pod \"observability-operator-d8bb48f5d-zl9h4\" (UID: \"3292536f-4a34-490a-a0e1-15241a0637a8\") " pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.507522 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec1a629d-26a7-4438-9134-d4e094ea8e99-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685b74c997-hlkk5\" (UID: \"ec1a629d-26a7-4438-9134-d4e094ea8e99\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.526731 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec1a629d-26a7-4438-9134-d4e094ea8e99-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685b74c997-hlkk5\" (UID: \"ec1a629d-26a7-4438-9134-d4e094ea8e99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.557513 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.586685 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.604599 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3292536f-4a34-490a-a0e1-15241a0637a8-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-zl9h4\" (UID: \"3292536f-4a34-490a-a0e1-15241a0637a8\") " pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.604722 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6vrw\" (UniqueName: \"kubernetes.io/projected/3292536f-4a34-490a-a0e1-15241a0637a8-kube-api-access-z6vrw\") pod \"observability-operator-d8bb48f5d-zl9h4\" (UID: \"3292536f-4a34-490a-a0e1-15241a0637a8\") " pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.612191 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3292536f-4a34-490a-a0e1-15241a0637a8-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-zl9h4\" (UID: \"3292536f-4a34-490a-a0e1-15241a0637a8\") " pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.640199 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6vrw\" (UniqueName: \"kubernetes.io/projected/3292536f-4a34-490a-a0e1-15241a0637a8-kube-api-access-z6vrw\") pod \"observability-operator-d8bb48f5d-zl9h4\" (UID: \"3292536f-4a34-490a-a0e1-15241a0637a8\") " pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.649605 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-rlj4w"] Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.650393 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rlj4w" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.654616 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-gzmjx" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.685185 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-rlj4w"] Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.706206 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chwrh\" (UniqueName: \"kubernetes.io/projected/69797472-4676-4805-be9a-41c75c0275b2-kube-api-access-chwrh\") pod \"perses-operator-5446b9c989-rlj4w\" (UID: \"69797472-4676-4805-be9a-41c75c0275b2\") " pod="openshift-operators/perses-operator-5446b9c989-rlj4w" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.706302 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/69797472-4676-4805-be9a-41c75c0275b2-openshift-service-ca\") pod \"perses-operator-5446b9c989-rlj4w\" (UID: \"69797472-4676-4805-be9a-41c75c0275b2\") " pod="openshift-operators/perses-operator-5446b9c989-rlj4w" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.778513 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.807294 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chwrh\" (UniqueName: \"kubernetes.io/projected/69797472-4676-4805-be9a-41c75c0275b2-kube-api-access-chwrh\") pod \"perses-operator-5446b9c989-rlj4w\" (UID: \"69797472-4676-4805-be9a-41c75c0275b2\") " pod="openshift-operators/perses-operator-5446b9c989-rlj4w" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.807457 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/69797472-4676-4805-be9a-41c75c0275b2-openshift-service-ca\") pod \"perses-operator-5446b9c989-rlj4w\" (UID: \"69797472-4676-4805-be9a-41c75c0275b2\") " pod="openshift-operators/perses-operator-5446b9c989-rlj4w" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.808652 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/69797472-4676-4805-be9a-41c75c0275b2-openshift-service-ca\") pod \"perses-operator-5446b9c989-rlj4w\" (UID: \"69797472-4676-4805-be9a-41c75c0275b2\") " pod="openshift-operators/perses-operator-5446b9c989-rlj4w" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.826856 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chwrh\" (UniqueName: \"kubernetes.io/projected/69797472-4676-4805-be9a-41c75c0275b2-kube-api-access-chwrh\") pod \"perses-operator-5446b9c989-rlj4w\" (UID: \"69797472-4676-4805-be9a-41c75c0275b2\") " pod="openshift-operators/perses-operator-5446b9c989-rlj4w" Dec 03 00:17:28 crc kubenswrapper[4811]: I1203 00:17:28.970517 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-rlj4w" Dec 03 00:17:29 crc kubenswrapper[4811]: I1203 00:17:29.533812 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74"] Dec 03 00:17:29 crc kubenswrapper[4811]: W1203 00:17:29.553351 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod090985bf_75aa_4307_a8b1_2c58e2746bf7.slice/crio-81b1ea1278f6f71e38aa9e7e7cb2ba501f83fe9d08883143196f72a5e83f22c5 WatchSource:0}: Error finding container 81b1ea1278f6f71e38aa9e7e7cb2ba501f83fe9d08883143196f72a5e83f22c5: Status 404 returned error can't find the container with id 81b1ea1278f6f71e38aa9e7e7cb2ba501f83fe9d08883143196f72a5e83f22c5 Dec 03 00:17:29 crc kubenswrapper[4811]: I1203 00:17:29.564763 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5"] Dec 03 00:17:29 crc kubenswrapper[4811]: I1203 00:17:29.622599 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-zl9h4"] Dec 03 00:17:29 crc kubenswrapper[4811]: W1203 00:17:29.632155 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3292536f_4a34_490a_a0e1_15241a0637a8.slice/crio-fc5ff427fa74b663ea8ae8cf567f83e3dbe822c40c14f820f4bea8977250b2de WatchSource:0}: Error finding container fc5ff427fa74b663ea8ae8cf567f83e3dbe822c40c14f820f4bea8977250b2de: Status 404 returned error can't find the container with id fc5ff427fa74b663ea8ae8cf567f83e3dbe822c40c14f820f4bea8977250b2de Dec 03 00:17:29 crc kubenswrapper[4811]: I1203 00:17:29.683539 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz"] Dec 03 00:17:29 crc kubenswrapper[4811]: W1203 00:17:29.697174 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod317f3f8c_58bf_420e_952e_8888d2b3fcf3.slice/crio-7ec7b77573a8c8e72689f701841fec3f467fa872d34b3fc1ac88ca00cb3ec026 WatchSource:0}: Error finding container 7ec7b77573a8c8e72689f701841fec3f467fa872d34b3fc1ac88ca00cb3ec026: Status 404 returned error can't find the container with id 7ec7b77573a8c8e72689f701841fec3f467fa872d34b3fc1ac88ca00cb3ec026 Dec 03 00:17:29 crc kubenswrapper[4811]: I1203 00:17:29.768776 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" event={"ID":"3292536f-4a34-490a-a0e1-15241a0637a8","Type":"ContainerStarted","Data":"fc5ff427fa74b663ea8ae8cf567f83e3dbe822c40c14f820f4bea8977250b2de"} Dec 03 00:17:29 crc kubenswrapper[4811]: I1203 00:17:29.771308 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5" event={"ID":"ec1a629d-26a7-4438-9134-d4e094ea8e99","Type":"ContainerStarted","Data":"ddd0de164e4a5b534feb42b28906e7efe21694aaf14046f10c24150fa0adfbce"} Dec 03 00:17:29 crc kubenswrapper[4811]: I1203 00:17:29.773656 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74" event={"ID":"090985bf-75aa-4307-a8b1-2c58e2746bf7","Type":"ContainerStarted","Data":"81b1ea1278f6f71e38aa9e7e7cb2ba501f83fe9d08883143196f72a5e83f22c5"} Dec 03 00:17:29 crc 
kubenswrapper[4811]: I1203 00:17:29.774894 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz" event={"ID":"317f3f8c-58bf-420e-952e-8888d2b3fcf3","Type":"ContainerStarted","Data":"7ec7b77573a8c8e72689f701841fec3f467fa872d34b3fc1ac88ca00cb3ec026"} Dec 03 00:17:29 crc kubenswrapper[4811]: I1203 00:17:29.777837 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" event={"ID":"bb2dfcce-0915-4f43-9a77-ad32fb713d1c","Type":"ContainerStarted","Data":"1c2270e8c9f324e97e9d601c5975ffc6e45307f1727a40f4e3e6fbddc77f25e0"} Dec 03 00:17:29 crc kubenswrapper[4811]: I1203 00:17:29.878119 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-rlj4w"] Dec 03 00:17:29 crc kubenswrapper[4811]: W1203 00:17:29.883493 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69797472_4676_4805_be9a_41c75c0275b2.slice/crio-3098aec096cdb4f678f8d09047196d31a35279adf1a939ee700689182758b271 WatchSource:0}: Error finding container 3098aec096cdb4f678f8d09047196d31a35279adf1a939ee700689182758b271: Status 404 returned error can't find the container with id 3098aec096cdb4f678f8d09047196d31a35279adf1a939ee700689182758b271 Dec 03 00:17:30 crc kubenswrapper[4811]: I1203 00:17:30.789310 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-rlj4w" event={"ID":"69797472-4676-4805-be9a-41c75c0275b2","Type":"ContainerStarted","Data":"3098aec096cdb4f678f8d09047196d31a35279adf1a939ee700689182758b271"} Dec 03 00:17:30 crc kubenswrapper[4811]: I1203 00:17:30.794859 4811 generic.go:334] "Generic (PLEG): container finished" podID="bb2dfcce-0915-4f43-9a77-ad32fb713d1c" containerID="1c2270e8c9f324e97e9d601c5975ffc6e45307f1727a40f4e3e6fbddc77f25e0" exitCode=0 Dec 03 00:17:30 crc kubenswrapper[4811]: I1203 00:17:30.794901 4811 generic.go:334] "Generic (PLEG): container finished" podID="bb2dfcce-0915-4f43-9a77-ad32fb713d1c" containerID="abc4e7f370ccbd61b236f3b63df93e1cb4bb40b9ae420f9a2505e2005f339e8e" exitCode=0 Dec 03 00:17:30 crc kubenswrapper[4811]: I1203 00:17:30.794925 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" event={"ID":"bb2dfcce-0915-4f43-9a77-ad32fb713d1c","Type":"ContainerDied","Data":"1c2270e8c9f324e97e9d601c5975ffc6e45307f1727a40f4e3e6fbddc77f25e0"} Dec 03 00:17:30 crc kubenswrapper[4811]: I1203 00:17:30.794954 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" event={"ID":"bb2dfcce-0915-4f43-9a77-ad32fb713d1c","Type":"ContainerDied","Data":"abc4e7f370ccbd61b236f3b63df93e1cb4bb40b9ae420f9a2505e2005f339e8e"} Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.201059 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-hn5zj"] Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.202425 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-hn5zj" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.206813 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-c42wd" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.225284 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-hn5zj"] Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.225642 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.226139 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.368840 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mhk9\" (UniqueName: \"kubernetes.io/projected/7e3c65e0-ad70-4e72-8c9c-f4b623946fba-kube-api-access-2mhk9\") pod \"interconnect-operator-5bb49f789d-hn5zj\" (UID: \"7e3c65e0-ad70-4e72-8c9c-f4b623946fba\") " pod="service-telemetry/interconnect-operator-5bb49f789d-hn5zj" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.405348 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.470366 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mhk9\" (UniqueName: \"kubernetes.io/projected/7e3c65e0-ad70-4e72-8c9c-f4b623946fba-kube-api-access-2mhk9\") pod \"interconnect-operator-5bb49f789d-hn5zj\" (UID: \"7e3c65e0-ad70-4e72-8c9c-f4b623946fba\") " pod="service-telemetry/interconnect-operator-5bb49f789d-hn5zj" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.518465 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mhk9\" (UniqueName: \"kubernetes.io/projected/7e3c65e0-ad70-4e72-8c9c-f4b623946fba-kube-api-access-2mhk9\") pod \"interconnect-operator-5bb49f789d-hn5zj\" (UID: \"7e3c65e0-ad70-4e72-8c9c-f4b623946fba\") " pod="service-telemetry/interconnect-operator-5bb49f789d-hn5zj" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.566847 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-hn5zj" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.572120 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4w7b\" (UniqueName: \"kubernetes.io/projected/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-kube-api-access-l4w7b\") pod \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\" (UID: \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\") " Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.572212 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-bundle\") pod \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\" (UID: \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\") " Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.572335 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-util\") pod \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\" (UID: \"bb2dfcce-0915-4f43-9a77-ad32fb713d1c\") " Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.577828 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-bundle" (OuterVolumeSpecName: "bundle") pod "bb2dfcce-0915-4f43-9a77-ad32fb713d1c" (UID: "bb2dfcce-0915-4f43-9a77-ad32fb713d1c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.600513 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-kube-api-access-l4w7b" (OuterVolumeSpecName: "kube-api-access-l4w7b") pod "bb2dfcce-0915-4f43-9a77-ad32fb713d1c" (UID: "bb2dfcce-0915-4f43-9a77-ad32fb713d1c"). InnerVolumeSpecName "kube-api-access-l4w7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.600621 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-util" (OuterVolumeSpecName: "util") pod "bb2dfcce-0915-4f43-9a77-ad32fb713d1c" (UID: "bb2dfcce-0915-4f43-9a77-ad32fb713d1c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.674223 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4w7b\" (UniqueName: \"kubernetes.io/projected/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-kube-api-access-l4w7b\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.674303 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.674317 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb2dfcce-0915-4f43-9a77-ad32fb713d1c-util\") on node \"crc\" DevicePath \"\"" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.821486 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" event={"ID":"bb2dfcce-0915-4f43-9a77-ad32fb713d1c","Type":"ContainerDied","Data":"c57447318041c282d87f1f1d4f7ab28408059fca22167c084879796a77e83806"} Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.821533 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t" Dec 03 00:17:32 crc kubenswrapper[4811]: I1203 00:17:32.821547 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c57447318041c282d87f1f1d4f7ab28408059fca22167c084879796a77e83806" Dec 03 00:17:33 crc kubenswrapper[4811]: I1203 00:17:33.517366 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-hn5zj"] Dec 03 00:17:33 crc kubenswrapper[4811]: W1203 00:17:33.593051 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e3c65e0_ad70_4e72_8c9c_f4b623946fba.slice/crio-4ef3490a66848de2ff08b6b4e914f048fc23ea3629760df8a2b50285d48d1445 WatchSource:0}: Error finding container 4ef3490a66848de2ff08b6b4e914f048fc23ea3629760df8a2b50285d48d1445: Status 404 returned error can't find the container with id 4ef3490a66848de2ff08b6b4e914f048fc23ea3629760df8a2b50285d48d1445 Dec 03 00:17:33 crc kubenswrapper[4811]: I1203 00:17:33.832879 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-hn5zj" event={"ID":"7e3c65e0-ad70-4e72-8c9c-f4b623946fba","Type":"ContainerStarted","Data":"4ef3490a66848de2ff08b6b4e914f048fc23ea3629760df8a2b50285d48d1445"} Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.023701 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-95b4c45d5-fccrp"] Dec 03 00:17:36 crc kubenswrapper[4811]: E1203 00:17:36.024232 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb2dfcce-0915-4f43-9a77-ad32fb713d1c" containerName="util" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.024246 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb2dfcce-0915-4f43-9a77-ad32fb713d1c" containerName="util" Dec 03 00:17:36 crc kubenswrapper[4811]: E1203 00:17:36.024688 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb2dfcce-0915-4f43-9a77-ad32fb713d1c" containerName="pull" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.024697 4811 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="bb2dfcce-0915-4f43-9a77-ad32fb713d1c" containerName="pull" Dec 03 00:17:36 crc kubenswrapper[4811]: E1203 00:17:36.024706 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb2dfcce-0915-4f43-9a77-ad32fb713d1c" containerName="extract" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.024714 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb2dfcce-0915-4f43-9a77-ad32fb713d1c" containerName="extract" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.024845 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb2dfcce-0915-4f43-9a77-ad32fb713d1c" containerName="extract" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.025320 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.032768 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-6885f" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.033031 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.050233 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-95b4c45d5-fccrp"] Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.062530 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsb79\" (UniqueName: \"kubernetes.io/projected/1b2c49bf-e3a4-49d8-aa19-d40664406435-kube-api-access-gsb79\") pod \"elastic-operator-95b4c45d5-fccrp\" (UID: \"1b2c49bf-e3a4-49d8-aa19-d40664406435\") " pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.062611 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b2c49bf-e3a4-49d8-aa19-d40664406435-apiservice-cert\") pod \"elastic-operator-95b4c45d5-fccrp\" (UID: \"1b2c49bf-e3a4-49d8-aa19-d40664406435\") " pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.062645 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b2c49bf-e3a4-49d8-aa19-d40664406435-webhook-cert\") pod \"elastic-operator-95b4c45d5-fccrp\" (UID: \"1b2c49bf-e3a4-49d8-aa19-d40664406435\") " pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.163748 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b2c49bf-e3a4-49d8-aa19-d40664406435-webhook-cert\") pod \"elastic-operator-95b4c45d5-fccrp\" (UID: \"1b2c49bf-e3a4-49d8-aa19-d40664406435\") " pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.164056 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsb79\" (UniqueName: \"kubernetes.io/projected/1b2c49bf-e3a4-49d8-aa19-d40664406435-kube-api-access-gsb79\") pod \"elastic-operator-95b4c45d5-fccrp\" (UID: \"1b2c49bf-e3a4-49d8-aa19-d40664406435\") " pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.164206 
4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b2c49bf-e3a4-49d8-aa19-d40664406435-apiservice-cert\") pod \"elastic-operator-95b4c45d5-fccrp\" (UID: \"1b2c49bf-e3a4-49d8-aa19-d40664406435\") " pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.173561 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b2c49bf-e3a4-49d8-aa19-d40664406435-webhook-cert\") pod \"elastic-operator-95b4c45d5-fccrp\" (UID: \"1b2c49bf-e3a4-49d8-aa19-d40664406435\") " pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.184075 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsb79\" (UniqueName: \"kubernetes.io/projected/1b2c49bf-e3a4-49d8-aa19-d40664406435-kube-api-access-gsb79\") pod \"elastic-operator-95b4c45d5-fccrp\" (UID: \"1b2c49bf-e3a4-49d8-aa19-d40664406435\") " pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.220185 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b2c49bf-e3a4-49d8-aa19-d40664406435-apiservice-cert\") pod \"elastic-operator-95b4c45d5-fccrp\" (UID: \"1b2c49bf-e3a4-49d8-aa19-d40664406435\") " pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" Dec 03 00:17:36 crc kubenswrapper[4811]: I1203 00:17:36.354007 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" Dec 03 00:17:37 crc kubenswrapper[4811]: I1203 00:17:37.063905 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-95b4c45d5-fccrp"] Dec 03 00:17:37 crc kubenswrapper[4811]: I1203 00:17:37.889181 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" event={"ID":"1b2c49bf-e3a4-49d8-aa19-d40664406435","Type":"ContainerStarted","Data":"7788dc48b5b841c0517b40c3a991391911137fa4d8e3b2466077e22806382672"} Dec 03 00:17:51 crc kubenswrapper[4811]: E1203 00:17:51.078855 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" Dec 03 00:17:51 crc kubenswrapper[4811]: E1203 00:17:51.080148 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) --images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) --images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) 
--images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) --openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:e718854a7d6ca8accf0fa72db0eb902e46c44d747ad51dc3f06bba0cefaa3c01,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:91531137fc1dcd740e277e0f65e120a0176a16f788c14c27925b61aa0b792ade,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:897e1bfad1187062725b54d87107bd0155972257a50d8335dd29e1999b828a4f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:95fe5b5746ca8c07ac9217ce2d8ac8e6afad17af210f9d8e0074df1310b209a8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:e9d9a89e4d8126a62b1852055482258ee528cac6398dd5d43ebad75ace0f33c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:ec684a0645ceb917b019af7ddba68c3533416e356ab0d0320a30e75ca7ebb31b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:3b9693fcde9b3a9494fb04735b1f7cfd0426f10be820fdc3f024175c0d3df1c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:580606f194180accc8abba099e17a26dca7522ec6d233fa2fdd40312771703e3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:e03777be39e71701935059cd877603874a13ac94daa73219d4e5e545599d78a9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-p
lugin-pf5-rhel9@sha256:aa47256193cfd2877853878e1ae97d2ab8b8e5deae62b387cbfad02b284d379c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:c595ff56b2cb85514bf4784db6ddb82e4e657e3e708a7fb695fc4997379a94d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:45a4ec2a519bcec99e886aa91596d5356a2414a2bd103baaef9fa7838c672eb2,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6vrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-d8bb48f5d-zl9h4_openshift-operators(3292536f-4a34-490a-a0e1-15241a0637a8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:17:51 crc kubenswrapper[4811]: E1203 00:17:51.082328 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" podUID="3292536f-4a34-490a-a0e1-15241a0637a8" Dec 03 00:17:51 crc kubenswrapper[4811]: E1203 00:17:51.847124 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" Dec 03 00:17:51 crc kubenswrapper[4811]: E1203 
00:17:51.847378 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rfvg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-668cf9dfbb-9t8rz_openshift-operators(317f3f8c-58bf-420e-952e-8888d2b3fcf3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:17:51 crc kubenswrapper[4811]: E1203 00:17:51.848646 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz" podUID="317f3f8c-58bf-420e-952e-8888d2b3fcf3" Dec 03 00:17:51 crc kubenswrapper[4811]: E1203 00:17:51.997202 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb\\\"\"" pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" podUID="3292536f-4a34-490a-a0e1-15241a0637a8" Dec 03 00:17:51 crc kubenswrapper[4811]: E1203 00:17:51.997210 4811 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3\\\"\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz" podUID="317f3f8c-58bf-420e-952e-8888d2b3fcf3" Dec 03 00:17:52 crc kubenswrapper[4811]: E1203 00:17:52.564843 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43" Dec 03 00:17:52 crc kubenswrapper[4811]: E1203 00:17:52.565052 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:interconnect-operator,Image:registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43,Command:[qdr-operator],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:qdr-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_QDROUTERD_IMAGE,Value:registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:amq7-interconnect-operator.v1.10.20,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mhk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod interconnect-operator-5bb49f789d-hn5zj_service-telemetry(7e3c65e0-ad70-4e72-8c9c-f4b623946fba): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:17:52 crc kubenswrapper[4811]: E1203 00:17:52.575316 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/interconnect-operator-5bb49f789d-hn5zj" podUID="7e3c65e0-ad70-4e72-8c9c-f4b623946fba" Dec 03 00:17:53 
crc kubenswrapper[4811]: E1203 00:17:53.001431 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43\\\"\"" pod="service-telemetry/interconnect-operator-5bb49f789d-hn5zj" podUID="7e3c65e0-ad70-4e72-8c9c-f4b623946fba" Dec 03 00:17:54 crc kubenswrapper[4811]: I1203 00:17:54.508388 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw"] Dec 03 00:17:54 crc kubenswrapper[4811]: I1203 00:17:54.509910 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw" Dec 03 00:17:54 crc kubenswrapper[4811]: I1203 00:17:54.513421 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 03 00:17:54 crc kubenswrapper[4811]: I1203 00:17:54.513489 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-lbd2v" Dec 03 00:17:54 crc kubenswrapper[4811]: I1203 00:17:54.514786 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 03 00:17:54 crc kubenswrapper[4811]: I1203 00:17:54.587985 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw"] Dec 03 00:17:54 crc kubenswrapper[4811]: I1203 00:17:54.664548 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b4vg\" (UniqueName: \"kubernetes.io/projected/9cc83478-aa21-40fc-8cc5-de515b6e1b10-kube-api-access-8b4vg\") pod \"cert-manager-operator-controller-manager-5446d6888b-p4rdw\" (UID: \"9cc83478-aa21-40fc-8cc5-de515b6e1b10\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw" Dec 03 00:17:54 crc kubenswrapper[4811]: I1203 00:17:54.664607 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9cc83478-aa21-40fc-8cc5-de515b6e1b10-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-p4rdw\" (UID: \"9cc83478-aa21-40fc-8cc5-de515b6e1b10\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw" Dec 03 00:17:54 crc kubenswrapper[4811]: I1203 00:17:54.765661 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b4vg\" (UniqueName: \"kubernetes.io/projected/9cc83478-aa21-40fc-8cc5-de515b6e1b10-kube-api-access-8b4vg\") pod \"cert-manager-operator-controller-manager-5446d6888b-p4rdw\" (UID: \"9cc83478-aa21-40fc-8cc5-de515b6e1b10\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw" Dec 03 00:17:54 crc kubenswrapper[4811]: I1203 00:17:54.765724 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9cc83478-aa21-40fc-8cc5-de515b6e1b10-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-p4rdw\" (UID: \"9cc83478-aa21-40fc-8cc5-de515b6e1b10\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw" Dec 03 00:17:54 crc 
kubenswrapper[4811]: I1203 00:17:54.766373 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9cc83478-aa21-40fc-8cc5-de515b6e1b10-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-p4rdw\" (UID: \"9cc83478-aa21-40fc-8cc5-de515b6e1b10\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw" Dec 03 00:17:54 crc kubenswrapper[4811]: I1203 00:17:54.798024 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b4vg\" (UniqueName: \"kubernetes.io/projected/9cc83478-aa21-40fc-8cc5-de515b6e1b10-kube-api-access-8b4vg\") pod \"cert-manager-operator-controller-manager-5446d6888b-p4rdw\" (UID: \"9cc83478-aa21-40fc-8cc5-de515b6e1b10\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw" Dec 03 00:17:54 crc kubenswrapper[4811]: I1203 00:17:54.825532 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.026832 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-rlj4w" event={"ID":"69797472-4676-4805-be9a-41c75c0275b2","Type":"ContainerStarted","Data":"4ec33cab22e58fca73d6a1ee3965ec2c218c69337213882a60225b5c14ebeebb"} Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.029493 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-rlj4w" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.033521 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5" event={"ID":"ec1a629d-26a7-4438-9134-d4e094ea8e99","Type":"ContainerStarted","Data":"8d54b1655bf01ecf888210da16bd636576beb2ae2cbde636d8c5744858457384"} Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.041236 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" event={"ID":"1b2c49bf-e3a4-49d8-aa19-d40664406435","Type":"ContainerStarted","Data":"b2df19e3a8e9c2b66e094d5d9bf55d66b0487c1216836e739a6cbb8a1ce11d9a"} Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.047297 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74" event={"ID":"090985bf-75aa-4307-a8b1-2c58e2746bf7","Type":"ContainerStarted","Data":"6ada1cee6ae5e780dc419764805d6657c4c3291bdb47aaa2fe780c28b5d23e6e"} Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.053565 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-rlj4w" podStartSLOduration=3.10926324 podStartE2EDuration="27.053541611s" podCreationTimestamp="2025-12-03 00:17:28 +0000 UTC" firstStartedPulling="2025-12-03 00:17:29.887024173 +0000 UTC m=+690.028853645" lastFinishedPulling="2025-12-03 00:17:53.831302544 +0000 UTC m=+713.973132016" observedRunningTime="2025-12-03 00:17:55.051180693 +0000 UTC m=+715.193010165" watchObservedRunningTime="2025-12-03 00:17:55.053541611 +0000 UTC m=+715.195371083" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.081094 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-hlkk5" podStartSLOduration=2.842273092 
podStartE2EDuration="27.081069554s" podCreationTimestamp="2025-12-03 00:17:28 +0000 UTC" firstStartedPulling="2025-12-03 00:17:29.586944834 +0000 UTC m=+689.728774306" lastFinishedPulling="2025-12-03 00:17:53.825741296 +0000 UTC m=+713.967570768" observedRunningTime="2025-12-03 00:17:55.076570093 +0000 UTC m=+715.218399565" watchObservedRunningTime="2025-12-03 00:17:55.081069554 +0000 UTC m=+715.222899026" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.140343 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685b74c997-p7x74" podStartSLOduration=2.86755936 podStartE2EDuration="27.140318595s" podCreationTimestamp="2025-12-03 00:17:28 +0000 UTC" firstStartedPulling="2025-12-03 00:17:29.557293138 +0000 UTC m=+689.699122610" lastFinishedPulling="2025-12-03 00:17:53.830052363 +0000 UTC m=+713.971881845" observedRunningTime="2025-12-03 00:17:55.130810679 +0000 UTC m=+715.272640151" watchObservedRunningTime="2025-12-03 00:17:55.140318595 +0000 UTC m=+715.282148067" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.177451 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-95b4c45d5-fccrp" podStartSLOduration=2.361387735 podStartE2EDuration="19.177412826s" podCreationTimestamp="2025-12-03 00:17:36 +0000 UTC" firstStartedPulling="2025-12-03 00:17:37.102483617 +0000 UTC m=+697.244313089" lastFinishedPulling="2025-12-03 00:17:53.918508708 +0000 UTC m=+714.060338180" observedRunningTime="2025-12-03 00:17:55.170554855 +0000 UTC m=+715.312384327" watchObservedRunningTime="2025-12-03 00:17:55.177412826 +0000 UTC m=+715.319242298" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.289589 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw"] Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.311566 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.313548 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.318766 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.318814 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.319039 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.319096 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.319132 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.319230 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.319488 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.319684 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-hclv9" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.320216 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.355143 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381164 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381213 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381244 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381313 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: 
\"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381484 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381581 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381611 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381699 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381797 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381827 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/de7ccb94-f641-49de-b976-5171b761c8bd-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381853 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381920 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381971 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.381993 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.382015 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.483998 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484073 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484152 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484194 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484221 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: 
\"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484277 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484312 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484356 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/de7ccb94-f641-49de-b976-5171b761c8bd-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484387 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484417 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484446 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484480 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484577 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: 
\"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484825 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.484863 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.485005 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.485083 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.485216 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.485226 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.485626 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.485672 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/de7ccb94-f641-49de-b976-5171b761c8bd-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.486580 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.487596 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.490058 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.490172 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.490452 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.490680 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.490803 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.490980 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/de7ccb94-f641-49de-b976-5171b761c8bd-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.501126 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/de7ccb94-f641-49de-b976-5171b761c8bd-downward-api\") pod \"elasticsearch-es-default-0\" (UID: 
\"de7ccb94-f641-49de-b976-5171b761c8bd\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.648621 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:17:55 crc kubenswrapper[4811]: I1203 00:17:55.901486 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 03 00:17:56 crc kubenswrapper[4811]: I1203 00:17:56.055111 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw" event={"ID":"9cc83478-aa21-40fc-8cc5-de515b6e1b10","Type":"ContainerStarted","Data":"46eb376c4391edc5a25ed3b21de422038f818ff8933a6ada4a3324bd8d1070ee"} Dec 03 00:17:56 crc kubenswrapper[4811]: I1203 00:17:56.056952 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"de7ccb94-f641-49de-b976-5171b761c8bd","Type":"ContainerStarted","Data":"3f1fa5f2f94429fd2eae4fe8c04fff2ba60b202015ea4d285e18333501b70123"} Dec 03 00:18:01 crc kubenswrapper[4811]: I1203 00:18:01.096787 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw" event={"ID":"9cc83478-aa21-40fc-8cc5-de515b6e1b10","Type":"ContainerStarted","Data":"0be98a53f10fe18278911e0dd22d28e101ef59ef2e3a41944ee53a495932786c"} Dec 03 00:18:01 crc kubenswrapper[4811]: I1203 00:18:01.126201 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-p4rdw" podStartSLOduration=2.479910875 podStartE2EDuration="7.126181141s" podCreationTimestamp="2025-12-03 00:17:54 +0000 UTC" firstStartedPulling="2025-12-03 00:17:55.314416746 +0000 UTC m=+715.456246218" lastFinishedPulling="2025-12-03 00:17:59.960687002 +0000 UTC m=+720.102516484" observedRunningTime="2025-12-03 00:18:01.123152566 +0000 UTC m=+721.264982058" watchObservedRunningTime="2025-12-03 00:18:01.126181141 +0000 UTC m=+721.268010613" Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.486463 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-s74g6"] Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.488472 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.494121 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.494346 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.503375 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-s74g6"] Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.507873 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-nxk8f" Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.558210 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32ef9f91-4356-47ed-9963-e18cc68f4771-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-s74g6\" (UID: \"32ef9f91-4356-47ed-9963-e18cc68f4771\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.558374 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg2kk\" (UniqueName: \"kubernetes.io/projected/32ef9f91-4356-47ed-9963-e18cc68f4771-kube-api-access-hg2kk\") pod \"cert-manager-webhook-f4fb5df64-s74g6\" (UID: \"32ef9f91-4356-47ed-9963-e18cc68f4771\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.659801 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32ef9f91-4356-47ed-9963-e18cc68f4771-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-s74g6\" (UID: \"32ef9f91-4356-47ed-9963-e18cc68f4771\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.660118 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg2kk\" (UniqueName: \"kubernetes.io/projected/32ef9f91-4356-47ed-9963-e18cc68f4771-kube-api-access-hg2kk\") pod \"cert-manager-webhook-f4fb5df64-s74g6\" (UID: \"32ef9f91-4356-47ed-9963-e18cc68f4771\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.686165 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg2kk\" (UniqueName: \"kubernetes.io/projected/32ef9f91-4356-47ed-9963-e18cc68f4771-kube-api-access-hg2kk\") pod \"cert-manager-webhook-f4fb5df64-s74g6\" (UID: \"32ef9f91-4356-47ed-9963-e18cc68f4771\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.690279 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32ef9f91-4356-47ed-9963-e18cc68f4771-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-s74g6\" (UID: \"32ef9f91-4356-47ed-9963-e18cc68f4771\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" Dec 03 00:18:04 crc kubenswrapper[4811]: I1203 00:18:04.835185 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" Dec 03 00:18:06 crc kubenswrapper[4811]: I1203 00:18:06.543281 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-5czlr"] Dec 03 00:18:06 crc kubenswrapper[4811]: I1203 00:18:06.544781 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5czlr" Dec 03 00:18:06 crc kubenswrapper[4811]: I1203 00:18:06.548888 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-w5bjb" Dec 03 00:18:06 crc kubenswrapper[4811]: I1203 00:18:06.554169 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-5czlr"] Dec 03 00:18:06 crc kubenswrapper[4811]: I1203 00:18:06.589826 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbk2n\" (UniqueName: \"kubernetes.io/projected/7a8903f1-ed1c-494b-8536-7138bd317a66-kube-api-access-zbk2n\") pod \"cert-manager-cainjector-855d9ccff4-5czlr\" (UID: \"7a8903f1-ed1c-494b-8536-7138bd317a66\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5czlr" Dec 03 00:18:06 crc kubenswrapper[4811]: I1203 00:18:06.589936 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a8903f1-ed1c-494b-8536-7138bd317a66-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-5czlr\" (UID: \"7a8903f1-ed1c-494b-8536-7138bd317a66\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5czlr" Dec 03 00:18:06 crc kubenswrapper[4811]: I1203 00:18:06.690952 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a8903f1-ed1c-494b-8536-7138bd317a66-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-5czlr\" (UID: \"7a8903f1-ed1c-494b-8536-7138bd317a66\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5czlr" Dec 03 00:18:06 crc kubenswrapper[4811]: I1203 00:18:06.691032 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbk2n\" (UniqueName: \"kubernetes.io/projected/7a8903f1-ed1c-494b-8536-7138bd317a66-kube-api-access-zbk2n\") pod \"cert-manager-cainjector-855d9ccff4-5czlr\" (UID: \"7a8903f1-ed1c-494b-8536-7138bd317a66\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5czlr" Dec 03 00:18:06 crc kubenswrapper[4811]: I1203 00:18:06.713326 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a8903f1-ed1c-494b-8536-7138bd317a66-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-5czlr\" (UID: \"7a8903f1-ed1c-494b-8536-7138bd317a66\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5czlr" Dec 03 00:18:06 crc kubenswrapper[4811]: I1203 00:18:06.716929 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbk2n\" (UniqueName: \"kubernetes.io/projected/7a8903f1-ed1c-494b-8536-7138bd317a66-kube-api-access-zbk2n\") pod \"cert-manager-cainjector-855d9ccff4-5czlr\" (UID: \"7a8903f1-ed1c-494b-8536-7138bd317a66\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5czlr" Dec 03 00:18:06 crc kubenswrapper[4811]: I1203 00:18:06.863709 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5czlr" Dec 03 00:18:08 crc kubenswrapper[4811]: I1203 00:18:08.976931 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-rlj4w" Dec 03 00:18:21 crc kubenswrapper[4811]: I1203 00:18:21.176203 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-s74g6"] Dec 03 00:18:21 crc kubenswrapper[4811]: I1203 00:18:21.250240 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-5czlr"] Dec 03 00:18:21 crc kubenswrapper[4811]: I1203 00:18:21.255020 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" event={"ID":"32ef9f91-4356-47ed-9963-e18cc68f4771","Type":"ContainerStarted","Data":"02220b2579b84983e23783ecfc8b85b7a89f0a76dded0e97fdfc69ac928b20be"} Dec 03 00:18:21 crc kubenswrapper[4811]: E1203 00:18:21.330650 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Dec 03 00:18:21 crc kubenswrapper[4811]: E1203 00:18:21.330917 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(de7ccb94-f641-49de-b976-5171b761c8bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Dec 03 00:18:21 crc kubenswrapper[4811]: E1203 00:18:21.332143 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="de7ccb94-f641-49de-b976-5171b761c8bd" Dec 03 00:18:22 crc kubenswrapper[4811]: I1203 00:18:22.265134 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-hn5zj" event={"ID":"7e3c65e0-ad70-4e72-8c9c-f4b623946fba","Type":"ContainerStarted","Data":"1abe7c47b25d55c8b0d564adde5d114216748903bfa54481002034c334487cf9"} Dec 03 00:18:22 crc kubenswrapper[4811]: I1203 00:18:22.266993 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5czlr" event={"ID":"7a8903f1-ed1c-494b-8536-7138bd317a66","Type":"ContainerStarted","Data":"6214a7ba381d9e68623e89ebc68e096a04a88334efc4831feddab407ef865be6"} Dec 03 00:18:22 crc kubenswrapper[4811]: I1203 00:18:22.272098 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz" event={"ID":"317f3f8c-58bf-420e-952e-8888d2b3fcf3","Type":"ContainerStarted","Data":"e56dff7c1a711dfe1a56209064ad5c785b74d2a4fa47b8d48151e94267f8b3e3"} Dec 03 00:18:22 crc kubenswrapper[4811]: I1203 00:18:22.276755 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" event={"ID":"3292536f-4a34-490a-a0e1-15241a0637a8","Type":"ContainerStarted","Data":"d551d61e947c85e184aabb021fb56a3dc1a8b6e41dece5c5444c270694cd3da8"} Dec 03 00:18:22 crc kubenswrapper[4811]: I1203 00:18:22.277684 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" Dec 03 00:18:22 crc kubenswrapper[4811]: E1203 00:18:22.277791 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="de7ccb94-f641-49de-b976-5171b761c8bd" Dec 03 00:18:22 crc kubenswrapper[4811]: I1203 00:18:22.281775 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" Dec 03 00:18:22 crc kubenswrapper[4811]: I1203 00:18:22.290447 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-hn5zj" podStartSLOduration=2.67199375 podStartE2EDuration="50.29041813s" podCreationTimestamp="2025-12-03 00:17:32 +0000 UTC" firstStartedPulling="2025-12-03 00:17:33.60959656 +0000 UTC m=+693.751426032" lastFinishedPulling="2025-12-03 00:18:21.22802094 +0000 UTC m=+741.369850412" observedRunningTime="2025-12-03 00:18:22.283967659 +0000 UTC m=+742.425797131" watchObservedRunningTime="2025-12-03 00:18:22.29041813 +0000 UTC m=+742.432247602" Dec 03 00:18:22 crc kubenswrapper[4811]: I1203 00:18:22.325896 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-9t8rz" podStartSLOduration=2.794422165 podStartE2EDuration="54.325870239s" podCreationTimestamp="2025-12-03 00:17:28 +0000 UTC" firstStartedPulling="2025-12-03 
00:17:29.702603935 +0000 UTC m=+689.844433407" lastFinishedPulling="2025-12-03 00:18:21.234051989 +0000 UTC m=+741.375881481" observedRunningTime="2025-12-03 00:18:22.315872751 +0000 UTC m=+742.457702243" watchObservedRunningTime="2025-12-03 00:18:22.325870239 +0000 UTC m=+742.467699711" Dec 03 00:18:22 crc kubenswrapper[4811]: I1203 00:18:22.389808 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-zl9h4" podStartSLOduration=2.795731708 podStartE2EDuration="54.389777706s" podCreationTimestamp="2025-12-03 00:17:28 +0000 UTC" firstStartedPulling="2025-12-03 00:17:29.635248793 +0000 UTC m=+689.777078265" lastFinishedPulling="2025-12-03 00:18:21.229294791 +0000 UTC m=+741.371124263" observedRunningTime="2025-12-03 00:18:22.385535151 +0000 UTC m=+742.527364623" watchObservedRunningTime="2025-12-03 00:18:22.389777706 +0000 UTC m=+742.531607178" Dec 03 00:18:22 crc kubenswrapper[4811]: I1203 00:18:22.480541 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 03 00:18:22 crc kubenswrapper[4811]: I1203 00:18:22.509809 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 03 00:18:23 crc kubenswrapper[4811]: I1203 00:18:23.286816 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-pn6dx"] Dec 03 00:18:23 crc kubenswrapper[4811]: I1203 00:18:23.288127 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-pn6dx" Dec 03 00:18:23 crc kubenswrapper[4811]: I1203 00:18:23.291596 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cqx8n" Dec 03 00:18:23 crc kubenswrapper[4811]: E1203 00:18:23.300349 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="de7ccb94-f641-49de-b976-5171b761c8bd" Dec 03 00:18:23 crc kubenswrapper[4811]: I1203 00:18:23.304122 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-pn6dx"] Dec 03 00:18:23 crc kubenswrapper[4811]: I1203 00:18:23.379587 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb8x7\" (UniqueName: \"kubernetes.io/projected/cab3d105-affe-4145-95c3-f9c4a6f766e3-kube-api-access-gb8x7\") pod \"cert-manager-86cb77c54b-pn6dx\" (UID: \"cab3d105-affe-4145-95c3-f9c4a6f766e3\") " pod="cert-manager/cert-manager-86cb77c54b-pn6dx" Dec 03 00:18:23 crc kubenswrapper[4811]: I1203 00:18:23.379738 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cab3d105-affe-4145-95c3-f9c4a6f766e3-bound-sa-token\") pod \"cert-manager-86cb77c54b-pn6dx\" (UID: \"cab3d105-affe-4145-95c3-f9c4a6f766e3\") " pod="cert-manager/cert-manager-86cb77c54b-pn6dx" Dec 03 00:18:23 crc kubenswrapper[4811]: I1203 00:18:23.481065 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb8x7\" (UniqueName: \"kubernetes.io/projected/cab3d105-affe-4145-95c3-f9c4a6f766e3-kube-api-access-gb8x7\") pod \"cert-manager-86cb77c54b-pn6dx\" (UID: 
\"cab3d105-affe-4145-95c3-f9c4a6f766e3\") " pod="cert-manager/cert-manager-86cb77c54b-pn6dx" Dec 03 00:18:23 crc kubenswrapper[4811]: I1203 00:18:23.481204 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cab3d105-affe-4145-95c3-f9c4a6f766e3-bound-sa-token\") pod \"cert-manager-86cb77c54b-pn6dx\" (UID: \"cab3d105-affe-4145-95c3-f9c4a6f766e3\") " pod="cert-manager/cert-manager-86cb77c54b-pn6dx" Dec 03 00:18:23 crc kubenswrapper[4811]: I1203 00:18:23.507862 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cab3d105-affe-4145-95c3-f9c4a6f766e3-bound-sa-token\") pod \"cert-manager-86cb77c54b-pn6dx\" (UID: \"cab3d105-affe-4145-95c3-f9c4a6f766e3\") " pod="cert-manager/cert-manager-86cb77c54b-pn6dx" Dec 03 00:18:23 crc kubenswrapper[4811]: I1203 00:18:23.509699 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb8x7\" (UniqueName: \"kubernetes.io/projected/cab3d105-affe-4145-95c3-f9c4a6f766e3-kube-api-access-gb8x7\") pod \"cert-manager-86cb77c54b-pn6dx\" (UID: \"cab3d105-affe-4145-95c3-f9c4a6f766e3\") " pod="cert-manager/cert-manager-86cb77c54b-pn6dx" Dec 03 00:18:23 crc kubenswrapper[4811]: I1203 00:18:23.610696 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-pn6dx" Dec 03 00:18:24 crc kubenswrapper[4811]: I1203 00:18:24.089225 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-pn6dx"] Dec 03 00:18:24 crc kubenswrapper[4811]: I1203 00:18:24.303970 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-pn6dx" event={"ID":"cab3d105-affe-4145-95c3-f9c4a6f766e3","Type":"ContainerStarted","Data":"dec3cd8407cbfe857493690cbb9da4ae27009a78662988606ba104a3e289b64a"} Dec 03 00:18:24 crc kubenswrapper[4811]: E1203 00:18:24.304568 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="de7ccb94-f641-49de-b976-5171b761c8bd" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.161172 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.171982 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.174572 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.174752 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.174977 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-xrr5t" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.184678 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.203847 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.242155 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/2e01132f-5523-466d-8b53-15736f68d4bc-builder-dockercfg-xrr5t-push\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.242211 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.242240 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.242311 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.242332 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e01132f-5523-466d-8b53-15736f68d4bc-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.242352 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: 
\"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.242382 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e01132f-5523-466d-8b53-15736f68d4bc-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.242399 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.242431 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.242449 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.242481 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/2e01132f-5523-466d-8b53-15736f68d4bc-builder-dockercfg-xrr5t-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.242502 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwr7x\" (UniqueName: \"kubernetes.io/projected/2e01132f-5523-466d-8b53-15736f68d4bc-kube-api-access-pwr7x\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.344507 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/2e01132f-5523-466d-8b53-15736f68d4bc-builder-dockercfg-xrr5t-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.344557 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwr7x\" (UniqueName: \"kubernetes.io/projected/2e01132f-5523-466d-8b53-15736f68d4bc-kube-api-access-pwr7x\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.344600 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/2e01132f-5523-466d-8b53-15736f68d4bc-builder-dockercfg-xrr5t-push\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.344622 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.344653 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.344685 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.344704 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e01132f-5523-466d-8b53-15736f68d4bc-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.344725 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.344746 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e01132f-5523-466d-8b53-15736f68d4bc-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.344760 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.344785 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.344801 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.345751 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.346436 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e01132f-5523-466d-8b53-15736f68d4bc-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.346675 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.346853 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.347152 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.347202 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e01132f-5523-466d-8b53-15736f68d4bc-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.347398 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc 
kubenswrapper[4811]: I1203 00:18:26.347791 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.348058 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.356599 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/2e01132f-5523-466d-8b53-15736f68d4bc-builder-dockercfg-xrr5t-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.368925 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/2e01132f-5523-466d-8b53-15736f68d4bc-builder-dockercfg-xrr5t-push\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.370143 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwr7x\" (UniqueName: \"kubernetes.io/projected/2e01132f-5523-466d-8b53-15736f68d4bc-kube-api-access-pwr7x\") pod \"service-telemetry-operator-1-build\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:26 crc kubenswrapper[4811]: I1203 00:18:26.496577 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:27 crc kubenswrapper[4811]: I1203 00:18:27.174122 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:18:27 crc kubenswrapper[4811]: I1203 00:18:27.344445 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"2e01132f-5523-466d-8b53-15736f68d4bc","Type":"ContainerStarted","Data":"45f4043c13026dac129879a06be00469ec2d37ff9eeee11f933a4cbf154fb07f"} Dec 03 00:18:27 crc kubenswrapper[4811]: I1203 00:18:27.408468 4811 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 00:18:34 crc kubenswrapper[4811]: I1203 00:18:34.392649 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" event={"ID":"32ef9f91-4356-47ed-9963-e18cc68f4771","Type":"ContainerStarted","Data":"109bad615e04144d702eb75489bf056cd9d88076ffd28011b2dc91a0add09dc1"} Dec 03 00:18:34 crc kubenswrapper[4811]: I1203 00:18:34.393210 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" Dec 03 00:18:34 crc kubenswrapper[4811]: I1203 00:18:34.397347 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5czlr" event={"ID":"7a8903f1-ed1c-494b-8536-7138bd317a66","Type":"ContainerStarted","Data":"8de8f8f01ddacd1e6a1ead93533485e72d715235821fd775c845487480aee416"} Dec 03 00:18:34 crc kubenswrapper[4811]: I1203 00:18:34.398894 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-pn6dx" event={"ID":"cab3d105-affe-4145-95c3-f9c4a6f766e3","Type":"ContainerStarted","Data":"b97389e8acc7f1e3c8df4ec1f46ca03fb5ee6724e770bf87653dc77e77e90abb"} Dec 03 00:18:34 crc kubenswrapper[4811]: I1203 00:18:34.412896 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" podStartSLOduration=17.854753076 podStartE2EDuration="30.412869332s" podCreationTimestamp="2025-12-03 00:18:04 +0000 UTC" firstStartedPulling="2025-12-03 00:18:21.249624106 +0000 UTC m=+741.391453568" lastFinishedPulling="2025-12-03 00:18:33.807740352 +0000 UTC m=+753.949569824" observedRunningTime="2025-12-03 00:18:34.411712613 +0000 UTC m=+754.553542085" watchObservedRunningTime="2025-12-03 00:18:34.412869332 +0000 UTC m=+754.554698804" Dec 03 00:18:34 crc kubenswrapper[4811]: I1203 00:18:34.439600 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-pn6dx" podStartSLOduration=1.780735462 podStartE2EDuration="11.439568594s" podCreationTimestamp="2025-12-03 00:18:23 +0000 UTC" firstStartedPulling="2025-12-03 00:18:24.107889461 +0000 UTC m=+744.249718933" lastFinishedPulling="2025-12-03 00:18:33.766722593 +0000 UTC m=+753.908552065" observedRunningTime="2025-12-03 00:18:34.434913098 +0000 UTC m=+754.576742570" watchObservedRunningTime="2025-12-03 00:18:34.439568594 +0000 UTC m=+754.581398066" Dec 03 00:18:34 crc kubenswrapper[4811]: I1203 00:18:34.480963 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5czlr" podStartSLOduration=16.014421139 podStartE2EDuration="28.48091178s" podCreationTimestamp="2025-12-03 00:18:06 +0000 UTC" 
firstStartedPulling="2025-12-03 00:18:21.264809213 +0000 UTC m=+741.406638685" lastFinishedPulling="2025-12-03 00:18:33.731299854 +0000 UTC m=+753.873129326" observedRunningTime="2025-12-03 00:18:34.47931165 +0000 UTC m=+754.621141122" watchObservedRunningTime="2025-12-03 00:18:34.48091178 +0000 UTC m=+754.622741252" Dec 03 00:18:36 crc kubenswrapper[4811]: I1203 00:18:36.531890 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.437594 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"2e01132f-5523-466d-8b53-15736f68d4bc","Type":"ContainerStarted","Data":"c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87"} Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.437854 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="2e01132f-5523-466d-8b53-15736f68d4bc" containerName="manage-dockerfile" containerID="cri-o://c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87" gracePeriod=30 Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.676163 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.680226 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.685050 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.685327 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.685770 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.691675 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.779620 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.779679 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.779736 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: 
\"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.779766 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.779809 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/2bb1c8ef-9f2f-4625-a163-65348f5431b5-builder-dockercfg-xrr5t-push\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.779841 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.779862 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.779900 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9xpv\" (UniqueName: \"kubernetes.io/projected/2bb1c8ef-9f2f-4625-a163-65348f5431b5-kube-api-access-h9xpv\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.779928 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2bb1c8ef-9f2f-4625-a163-65348f5431b5-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.779952 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/2bb1c8ef-9f2f-4625-a163-65348f5431b5-builder-dockercfg-xrr5t-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.779980 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2bb1c8ef-9f2f-4625-a163-65348f5431b5-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: 
\"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.780011 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.817234 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_2e01132f-5523-466d-8b53-15736f68d4bc/manage-dockerfile/0.log" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.817339 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881330 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-container-storage-run\") pod \"2e01132f-5523-466d-8b53-15736f68d4bc\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881404 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-container-storage-root\") pod \"2e01132f-5523-466d-8b53-15736f68d4bc\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881522 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwr7x\" (UniqueName: \"kubernetes.io/projected/2e01132f-5523-466d-8b53-15736f68d4bc-kube-api-access-pwr7x\") pod \"2e01132f-5523-466d-8b53-15736f68d4bc\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881549 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e01132f-5523-466d-8b53-15736f68d4bc-node-pullsecrets\") pod \"2e01132f-5523-466d-8b53-15736f68d4bc\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881591 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e01132f-5523-466d-8b53-15736f68d4bc-buildcachedir\") pod \"2e01132f-5523-466d-8b53-15736f68d4bc\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881629 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-build-blob-cache\") pod \"2e01132f-5523-466d-8b53-15736f68d4bc\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881660 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-buildworkdir\") pod \"2e01132f-5523-466d-8b53-15736f68d4bc\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " Dec 03 00:18:38 crc 
kubenswrapper[4811]: I1203 00:18:38.881646 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e01132f-5523-466d-8b53-15736f68d4bc-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2e01132f-5523-466d-8b53-15736f68d4bc" (UID: "2e01132f-5523-466d-8b53-15736f68d4bc"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881691 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-proxy-ca-bundles\") pod \"2e01132f-5523-466d-8b53-15736f68d4bc\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881768 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/2e01132f-5523-466d-8b53-15736f68d4bc-builder-dockercfg-xrr5t-push\") pod \"2e01132f-5523-466d-8b53-15736f68d4bc\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881795 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-ca-bundles\") pod \"2e01132f-5523-466d-8b53-15736f68d4bc\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881868 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-system-configs\") pod \"2e01132f-5523-466d-8b53-15736f68d4bc\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881896 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/2e01132f-5523-466d-8b53-15736f68d4bc-builder-dockercfg-xrr5t-pull\") pod \"2e01132f-5523-466d-8b53-15736f68d4bc\" (UID: \"2e01132f-5523-466d-8b53-15736f68d4bc\") " Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.881929 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2e01132f-5523-466d-8b53-15736f68d4bc" (UID: "2e01132f-5523-466d-8b53-15736f68d4bc"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882195 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9xpv\" (UniqueName: \"kubernetes.io/projected/2bb1c8ef-9f2f-4625-a163-65348f5431b5-kube-api-access-h9xpv\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882246 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2bb1c8ef-9f2f-4625-a163-65348f5431b5-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882293 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/2bb1c8ef-9f2f-4625-a163-65348f5431b5-builder-dockercfg-xrr5t-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882311 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2e01132f-5523-466d-8b53-15736f68d4bc" (UID: "2e01132f-5523-466d-8b53-15736f68d4bc"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882322 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2bb1c8ef-9f2f-4625-a163-65348f5431b5-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882351 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e01132f-5523-466d-8b53-15736f68d4bc-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2e01132f-5523-466d-8b53-15736f68d4bc" (UID: "2e01132f-5523-466d-8b53-15736f68d4bc"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882355 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882475 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882545 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882701 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882745 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882822 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/2bb1c8ef-9f2f-4625-a163-65348f5431b5-builder-dockercfg-xrr5t-push\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882890 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.882920 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.883006 4811 reconciler_common.go:293] "Volume detached for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2e01132f-5523-466d-8b53-15736f68d4bc-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.883031 4811 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2e01132f-5523-466d-8b53-15736f68d4bc-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.883044 4811 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.883061 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.883275 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.883417 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.883813 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2e01132f-5523-466d-8b53-15736f68d4bc" (UID: "2e01132f-5523-466d-8b53-15736f68d4bc"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.884004 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2e01132f-5523-466d-8b53-15736f68d4bc" (UID: "2e01132f-5523-466d-8b53-15736f68d4bc"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.883943 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.884516 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2e01132f-5523-466d-8b53-15736f68d4bc" (UID: "2e01132f-5523-466d-8b53-15736f68d4bc"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.884793 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.884906 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2bb1c8ef-9f2f-4625-a163-65348f5431b5-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.885343 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2e01132f-5523-466d-8b53-15736f68d4bc" (UID: "2e01132f-5523-466d-8b53-15736f68d4bc"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.885502 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.885566 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2bb1c8ef-9f2f-4625-a163-65348f5431b5-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.885637 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "2e01132f-5523-466d-8b53-15736f68d4bc" (UID: "2e01132f-5523-466d-8b53-15736f68d4bc"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.885655 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.885900 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.892192 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e01132f-5523-466d-8b53-15736f68d4bc-builder-dockercfg-xrr5t-push" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-push") pod "2e01132f-5523-466d-8b53-15736f68d4bc" (UID: "2e01132f-5523-466d-8b53-15736f68d4bc"). InnerVolumeSpecName "builder-dockercfg-xrr5t-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.892223 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e01132f-5523-466d-8b53-15736f68d4bc-builder-dockercfg-xrr5t-pull" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-pull") pod "2e01132f-5523-466d-8b53-15736f68d4bc" (UID: "2e01132f-5523-466d-8b53-15736f68d4bc"). InnerVolumeSpecName "builder-dockercfg-xrr5t-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.892904 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/2bb1c8ef-9f2f-4625-a163-65348f5431b5-builder-dockercfg-xrr5t-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.895095 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e01132f-5523-466d-8b53-15736f68d4bc-kube-api-access-pwr7x" (OuterVolumeSpecName: "kube-api-access-pwr7x") pod "2e01132f-5523-466d-8b53-15736f68d4bc" (UID: "2e01132f-5523-466d-8b53-15736f68d4bc"). InnerVolumeSpecName "kube-api-access-pwr7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.902629 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/2bb1c8ef-9f2f-4625-a163-65348f5431b5-builder-dockercfg-xrr5t-push\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.906047 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9xpv\" (UniqueName: \"kubernetes.io/projected/2bb1c8ef-9f2f-4625-a163-65348f5431b5-kube-api-access-h9xpv\") pod \"service-telemetry-operator-2-build\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.985024 4811 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.985063 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/2e01132f-5523-466d-8b53-15736f68d4bc-builder-dockercfg-xrr5t-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.985079 4811 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.985095 4811 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2e01132f-5523-466d-8b53-15736f68d4bc-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.985108 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/2e01132f-5523-466d-8b53-15736f68d4bc-builder-dockercfg-xrr5t-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.985120 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.985133 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwr7x\" (UniqueName: \"kubernetes.io/projected/2e01132f-5523-466d-8b53-15736f68d4bc-kube-api-access-pwr7x\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:38 crc kubenswrapper[4811]: I1203 00:18:38.985145 4811 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2e01132f-5523-466d-8b53-15736f68d4bc-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.000100 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.452305 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.452752 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"de7ccb94-f641-49de-b976-5171b761c8bd","Type":"ContainerStarted","Data":"4ccfb4da231d14e91e6a7207f5b48172d585704940261466de1a1f288fd0919e"} Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.456948 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_2e01132f-5523-466d-8b53-15736f68d4bc/manage-dockerfile/0.log" Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.456995 4811 generic.go:334] "Generic (PLEG): container finished" podID="2e01132f-5523-466d-8b53-15736f68d4bc" containerID="c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87" exitCode=1 Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.457022 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"2e01132f-5523-466d-8b53-15736f68d4bc","Type":"ContainerDied","Data":"c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87"} Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.457040 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"2e01132f-5523-466d-8b53-15736f68d4bc","Type":"ContainerDied","Data":"45f4043c13026dac129879a06be00469ec2d37ff9eeee11f933a4cbf154fb07f"} Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.457060 4811 scope.go:117] "RemoveContainer" containerID="c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87" Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.457161 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.514855 4811 scope.go:117] "RemoveContainer" containerID="c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87" Dec 03 00:18:39 crc kubenswrapper[4811]: E1203 00:18:39.515318 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87\": container with ID starting with c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87 not found: ID does not exist" containerID="c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87" Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.515354 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87"} err="failed to get container status \"c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87\": rpc error: code = NotFound desc = could not find container \"c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87\": container with ID starting with c775212d9cd0fce27293f7e0e2d4efaca59a7df708e638f52fa67f12c11b8f87 not found: ID does not exist" Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.527468 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.540665 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:18:39 crc kubenswrapper[4811]: I1203 00:18:39.840225 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-s74g6" Dec 03 00:18:40 crc kubenswrapper[4811]: I1203 00:18:40.129079 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e01132f-5523-466d-8b53-15736f68d4bc" path="/var/lib/kubelet/pods/2e01132f-5523-466d-8b53-15736f68d4bc/volumes" Dec 03 00:18:40 crc kubenswrapper[4811]: I1203 00:18:40.464280 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"2bb1c8ef-9f2f-4625-a163-65348f5431b5","Type":"ContainerStarted","Data":"cc7c6a559f18093993ae5a7c010ac01bccda41a870e58596ec8f01b5f7dc051b"} Dec 03 00:18:40 crc kubenswrapper[4811]: I1203 00:18:40.464337 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"2bb1c8ef-9f2f-4625-a163-65348f5431b5","Type":"ContainerStarted","Data":"edfc15f94fa0bd94613c3627044b9d9107129fd40011a557f829bb1f7450b8ad"} Dec 03 00:18:41 crc kubenswrapper[4811]: I1203 00:18:41.486890 4811 generic.go:334] "Generic (PLEG): container finished" podID="de7ccb94-f641-49de-b976-5171b761c8bd" containerID="4ccfb4da231d14e91e6a7207f5b48172d585704940261466de1a1f288fd0919e" exitCode=0 Dec 03 00:18:41 crc kubenswrapper[4811]: I1203 00:18:41.487004 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"de7ccb94-f641-49de-b976-5171b761c8bd","Type":"ContainerDied","Data":"4ccfb4da231d14e91e6a7207f5b48172d585704940261466de1a1f288fd0919e"} Dec 03 00:18:43 crc kubenswrapper[4811]: I1203 00:18:43.509017 4811 generic.go:334] "Generic (PLEG): container finished" podID="de7ccb94-f641-49de-b976-5171b761c8bd" 
containerID="47e8b678cb9d49a9ef5dc6aed01eda16caa3ff14422b2522bd7f9bfaf370bb1c" exitCode=0 Dec 03 00:18:43 crc kubenswrapper[4811]: I1203 00:18:43.509099 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"de7ccb94-f641-49de-b976-5171b761c8bd","Type":"ContainerDied","Data":"47e8b678cb9d49a9ef5dc6aed01eda16caa3ff14422b2522bd7f9bfaf370bb1c"} Dec 03 00:18:44 crc kubenswrapper[4811]: I1203 00:18:44.519563 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"de7ccb94-f641-49de-b976-5171b761c8bd","Type":"ContainerStarted","Data":"55c6d591c6cc800dead2d6276ce6c0a5f674a2d0b9a79cc7150c53d56e8ca23e"} Dec 03 00:18:44 crc kubenswrapper[4811]: I1203 00:18:44.520151 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:18:44 crc kubenswrapper[4811]: I1203 00:18:44.571652 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.412769266 podStartE2EDuration="49.571620143s" podCreationTimestamp="2025-12-03 00:17:55 +0000 UTC" firstStartedPulling="2025-12-03 00:17:55.913139758 +0000 UTC m=+716.054969230" lastFinishedPulling="2025-12-03 00:18:39.071990635 +0000 UTC m=+759.213820107" observedRunningTime="2025-12-03 00:18:44.566598548 +0000 UTC m=+764.708428060" watchObservedRunningTime="2025-12-03 00:18:44.571620143 +0000 UTC m=+764.713449615" Dec 03 00:18:50 crc kubenswrapper[4811]: I1203 00:18:50.565176 4811 generic.go:334] "Generic (PLEG): container finished" podID="2bb1c8ef-9f2f-4625-a163-65348f5431b5" containerID="cc7c6a559f18093993ae5a7c010ac01bccda41a870e58596ec8f01b5f7dc051b" exitCode=0 Dec 03 00:18:50 crc kubenswrapper[4811]: I1203 00:18:50.565317 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"2bb1c8ef-9f2f-4625-a163-65348f5431b5","Type":"ContainerDied","Data":"cc7c6a559f18093993ae5a7c010ac01bccda41a870e58596ec8f01b5f7dc051b"} Dec 03 00:18:51 crc kubenswrapper[4811]: I1203 00:18:51.576707 4811 generic.go:334] "Generic (PLEG): container finished" podID="2bb1c8ef-9f2f-4625-a163-65348f5431b5" containerID="1ccddd306b1db2712125a357be93375f04aa831e4450ba212275890355879a70" exitCode=0 Dec 03 00:18:51 crc kubenswrapper[4811]: I1203 00:18:51.576790 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"2bb1c8ef-9f2f-4625-a163-65348f5431b5","Type":"ContainerDied","Data":"1ccddd306b1db2712125a357be93375f04aa831e4450ba212275890355879a70"} Dec 03 00:18:51 crc kubenswrapper[4811]: I1203 00:18:51.654298 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_2bb1c8ef-9f2f-4625-a163-65348f5431b5/manage-dockerfile/0.log" Dec 03 00:18:52 crc kubenswrapper[4811]: I1203 00:18:52.585789 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"2bb1c8ef-9f2f-4625-a163-65348f5431b5","Type":"ContainerStarted","Data":"c0af7d4ac848a5be5769f8c258255bd5b66d6d1189caf5b010abd9035a6f4926"} Dec 03 00:18:52 crc kubenswrapper[4811]: I1203 00:18:52.619148 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=14.61912076 podStartE2EDuration="14.61912076s" 
podCreationTimestamp="2025-12-03 00:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:18:52.614684311 +0000 UTC m=+772.756513783" watchObservedRunningTime="2025-12-03 00:18:52.61912076 +0000 UTC m=+772.760950232" Dec 03 00:18:55 crc kubenswrapper[4811]: I1203 00:18:55.811956 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="de7ccb94-f641-49de-b976-5171b761c8bd" containerName="elasticsearch" probeResult="failure" output=< Dec 03 00:18:55 crc kubenswrapper[4811]: {"timestamp": "2025-12-03T00:18:55+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 03 00:18:55 crc kubenswrapper[4811]: > Dec 03 00:19:01 crc kubenswrapper[4811]: I1203 00:19:01.270241 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:19:02 crc kubenswrapper[4811]: I1203 00:19:02.941027 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:19:02 crc kubenswrapper[4811]: I1203 00:19:02.941131 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:19:32 crc kubenswrapper[4811]: I1203 00:19:32.940194 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:19:32 crc kubenswrapper[4811]: I1203 00:19:32.940870 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:20:02 crc kubenswrapper[4811]: I1203 00:20:02.940459 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:20:02 crc kubenswrapper[4811]: I1203 00:20:02.941285 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:20:02 crc kubenswrapper[4811]: I1203 00:20:02.941336 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:20:02 crc kubenswrapper[4811]: I1203 00:20:02.941984 4811 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6b2aa2a2ddd7fd20474659fdb1c709c86b66b5560f41a3dec0d4ef06fe80f30"} pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:20:02 crc kubenswrapper[4811]: I1203 00:20:02.942030 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" containerID="cri-o://e6b2aa2a2ddd7fd20474659fdb1c709c86b66b5560f41a3dec0d4ef06fe80f30" gracePeriod=600 Dec 03 00:20:03 crc kubenswrapper[4811]: I1203 00:20:03.120772 4811 generic.go:334] "Generic (PLEG): container finished" podID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerID="e6b2aa2a2ddd7fd20474659fdb1c709c86b66b5560f41a3dec0d4ef06fe80f30" exitCode=0 Dec 03 00:20:03 crc kubenswrapper[4811]: I1203 00:20:03.120852 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerDied","Data":"e6b2aa2a2ddd7fd20474659fdb1c709c86b66b5560f41a3dec0d4ef06fe80f30"} Dec 03 00:20:03 crc kubenswrapper[4811]: I1203 00:20:03.121203 4811 scope.go:117] "RemoveContainer" containerID="7fbc3e78d8acc5df7781522124d991f4e42780ce8b0fd9b01a7c2846f764d716" Dec 03 00:20:04 crc kubenswrapper[4811]: I1203 00:20:04.130139 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerStarted","Data":"874b8048feed7e191debfdbcf8853f72ec34ff95af49474a68f75504656f9153"} Dec 03 00:20:43 crc kubenswrapper[4811]: I1203 00:20:43.415699 4811 generic.go:334] "Generic (PLEG): container finished" podID="2bb1c8ef-9f2f-4625-a163-65348f5431b5" containerID="c0af7d4ac848a5be5769f8c258255bd5b66d6d1189caf5b010abd9035a6f4926" exitCode=0 Dec 03 00:20:43 crc kubenswrapper[4811]: I1203 00:20:43.415761 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"2bb1c8ef-9f2f-4625-a163-65348f5431b5","Type":"ContainerDied","Data":"c0af7d4ac848a5be5769f8c258255bd5b66d6d1189caf5b010abd9035a6f4926"} Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.769892 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.811224 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-proxy-ca-bundles\") pod \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.811311 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-ca-bundles\") pod \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.812524 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2bb1c8ef-9f2f-4625-a163-65348f5431b5" (UID: "2bb1c8ef-9f2f-4625-a163-65348f5431b5"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.812626 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2bb1c8ef-9f2f-4625-a163-65348f5431b5" (UID: "2bb1c8ef-9f2f-4625-a163-65348f5431b5"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.912985 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/2bb1c8ef-9f2f-4625-a163-65348f5431b5-builder-dockercfg-xrr5t-push\") pod \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.913350 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-blob-cache\") pod \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.913516 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/2bb1c8ef-9f2f-4625-a163-65348f5431b5-builder-dockercfg-xrr5t-pull\") pod \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.913619 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-container-storage-root\") pod \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.913745 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-system-configs\") pod \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\" (UID: 
\"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.913848 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2bb1c8ef-9f2f-4625-a163-65348f5431b5-node-pullsecrets\") pod \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.913976 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-buildworkdir\") pod \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.914052 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2bb1c8ef-9f2f-4625-a163-65348f5431b5-buildcachedir\") pod \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.914136 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-container-storage-run\") pod \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.914136 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2bb1c8ef-9f2f-4625-a163-65348f5431b5" (UID: "2bb1c8ef-9f2f-4625-a163-65348f5431b5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.914185 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bb1c8ef-9f2f-4625-a163-65348f5431b5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2bb1c8ef-9f2f-4625-a163-65348f5431b5" (UID: "2bb1c8ef-9f2f-4625-a163-65348f5431b5"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.914239 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bb1c8ef-9f2f-4625-a163-65348f5431b5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2bb1c8ef-9f2f-4625-a163-65348f5431b5" (UID: "2bb1c8ef-9f2f-4625-a163-65348f5431b5"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.914249 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9xpv\" (UniqueName: \"kubernetes.io/projected/2bb1c8ef-9f2f-4625-a163-65348f5431b5-kube-api-access-h9xpv\") pod \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\" (UID: \"2bb1c8ef-9f2f-4625-a163-65348f5431b5\") " Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.914760 4811 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2bb1c8ef-9f2f-4625-a163-65348f5431b5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.914845 4811 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2bb1c8ef-9f2f-4625-a163-65348f5431b5-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.914916 4811 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.914993 4811 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.915067 4811 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.915071 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "2bb1c8ef-9f2f-4625-a163-65348f5431b5" (UID: "2bb1c8ef-9f2f-4625-a163-65348f5431b5"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.918918 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb1c8ef-9f2f-4625-a163-65348f5431b5-builder-dockercfg-xrr5t-push" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-push") pod "2bb1c8ef-9f2f-4625-a163-65348f5431b5" (UID: "2bb1c8ef-9f2f-4625-a163-65348f5431b5"). InnerVolumeSpecName "builder-dockercfg-xrr5t-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.919282 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb1c8ef-9f2f-4625-a163-65348f5431b5-builder-dockercfg-xrr5t-pull" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-pull") pod "2bb1c8ef-9f2f-4625-a163-65348f5431b5" (UID: "2bb1c8ef-9f2f-4625-a163-65348f5431b5"). InnerVolumeSpecName "builder-dockercfg-xrr5t-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.919551 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bb1c8ef-9f2f-4625-a163-65348f5431b5-kube-api-access-h9xpv" (OuterVolumeSpecName: "kube-api-access-h9xpv") pod "2bb1c8ef-9f2f-4625-a163-65348f5431b5" (UID: "2bb1c8ef-9f2f-4625-a163-65348f5431b5"). InnerVolumeSpecName "kube-api-access-h9xpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:20:44 crc kubenswrapper[4811]: I1203 00:20:44.947109 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2bb1c8ef-9f2f-4625-a163-65348f5431b5" (UID: "2bb1c8ef-9f2f-4625-a163-65348f5431b5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:45 crc kubenswrapper[4811]: I1203 00:20:45.017186 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/2bb1c8ef-9f2f-4625-a163-65348f5431b5-builder-dockercfg-xrr5t-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:45 crc kubenswrapper[4811]: I1203 00:20:45.017237 4811 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:45 crc kubenswrapper[4811]: I1203 00:20:45.017249 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:45 crc kubenswrapper[4811]: I1203 00:20:45.017272 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9xpv\" (UniqueName: \"kubernetes.io/projected/2bb1c8ef-9f2f-4625-a163-65348f5431b5-kube-api-access-h9xpv\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:45 crc kubenswrapper[4811]: I1203 00:20:45.017283 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/2bb1c8ef-9f2f-4625-a163-65348f5431b5-builder-dockercfg-xrr5t-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:45 crc kubenswrapper[4811]: I1203 00:20:45.118541 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2bb1c8ef-9f2f-4625-a163-65348f5431b5" (UID: "2bb1c8ef-9f2f-4625-a163-65348f5431b5"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:45 crc kubenswrapper[4811]: I1203 00:20:45.219586 4811 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:45 crc kubenswrapper[4811]: I1203 00:20:45.436608 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"2bb1c8ef-9f2f-4625-a163-65348f5431b5","Type":"ContainerDied","Data":"edfc15f94fa0bd94613c3627044b9d9107129fd40011a557f829bb1f7450b8ad"} Dec 03 00:20:45 crc kubenswrapper[4811]: I1203 00:20:45.437132 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edfc15f94fa0bd94613c3627044b9d9107129fd40011a557f829bb1f7450b8ad" Dec 03 00:20:45 crc kubenswrapper[4811]: I1203 00:20:45.437408 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:20:47 crc kubenswrapper[4811]: I1203 00:20:47.084511 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2bb1c8ef-9f2f-4625-a163-65348f5431b5" (UID: "2bb1c8ef-9f2f-4625-a163-65348f5431b5"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:47 crc kubenswrapper[4811]: I1203 00:20:47.143737 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2bb1c8ef-9f2f-4625-a163-65348f5431b5-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.832295 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 03 00:20:49 crc kubenswrapper[4811]: E1203 00:20:49.833017 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bb1c8ef-9f2f-4625-a163-65348f5431b5" containerName="git-clone" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.833038 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bb1c8ef-9f2f-4625-a163-65348f5431b5" containerName="git-clone" Dec 03 00:20:49 crc kubenswrapper[4811]: E1203 00:20:49.833061 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bb1c8ef-9f2f-4625-a163-65348f5431b5" containerName="docker-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.833071 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bb1c8ef-9f2f-4625-a163-65348f5431b5" containerName="docker-build" Dec 03 00:20:49 crc kubenswrapper[4811]: E1203 00:20:49.833085 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bb1c8ef-9f2f-4625-a163-65348f5431b5" containerName="manage-dockerfile" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.833112 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bb1c8ef-9f2f-4625-a163-65348f5431b5" containerName="manage-dockerfile" Dec 03 00:20:49 crc kubenswrapper[4811]: E1203 00:20:49.833136 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e01132f-5523-466d-8b53-15736f68d4bc" containerName="manage-dockerfile" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.833147 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e01132f-5523-466d-8b53-15736f68d4bc" 
containerName="manage-dockerfile" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.833413 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e01132f-5523-466d-8b53-15736f68d4bc" containerName="manage-dockerfile" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.833433 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bb1c8ef-9f2f-4625-a163-65348f5431b5" containerName="docker-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.834472 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.836888 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.838572 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.838898 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-xrr5t" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.841092 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.853370 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.910473 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ad5aee2-ef50-47fa-963c-3510da5b8070-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.910646 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.910742 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/8ad5aee2-ef50-47fa-963c-3510da5b8070-builder-dockercfg-xrr5t-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.910837 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.910915 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-container-storage-run\") pod 
\"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.910945 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.910971 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8ad5aee2-ef50-47fa-963c-3510da5b8070-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.910991 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/8ad5aee2-ef50-47fa-963c-3510da5b8070-builder-dockercfg-xrr5t-push\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.911015 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.911040 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.911107 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:49 crc kubenswrapper[4811]: I1203 00:20:49.911131 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pclk4\" (UniqueName: \"kubernetes.io/projected/8ad5aee2-ef50-47fa-963c-3510da5b8070-kube-api-access-pclk4\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.011890 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8ad5aee2-ef50-47fa-963c-3510da5b8070-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.012093 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8ad5aee2-ef50-47fa-963c-3510da5b8070-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.012442 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/8ad5aee2-ef50-47fa-963c-3510da5b8070-builder-dockercfg-xrr5t-push\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.012623 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.012799 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.012979 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.013192 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pclk4\" (UniqueName: \"kubernetes.io/projected/8ad5aee2-ef50-47fa-963c-3510da5b8070-kube-api-access-pclk4\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.013414 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ad5aee2-ef50-47fa-963c-3510da5b8070-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.013638 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.013787 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.013641 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.013485 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.013525 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ad5aee2-ef50-47fa-963c-3510da5b8070-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.013808 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/8ad5aee2-ef50-47fa-963c-3510da5b8070-builder-dockercfg-xrr5t-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.014019 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.014069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.014118 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.014538 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 
00:20:50.014715 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.014789 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.015133 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.019156 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/8ad5aee2-ef50-47fa-963c-3510da5b8070-builder-dockercfg-xrr5t-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.019228 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/8ad5aee2-ef50-47fa-963c-3510da5b8070-builder-dockercfg-xrr5t-push\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.047131 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pclk4\" (UniqueName: \"kubernetes.io/projected/8ad5aee2-ef50-47fa-963c-3510da5b8070-kube-api-access-pclk4\") pod \"smart-gateway-operator-1-build\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.152767 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:20:50 crc kubenswrapper[4811]: I1203 00:20:50.607368 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 03 00:20:51 crc kubenswrapper[4811]: E1203 00:20:51.268185 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad5aee2_ef50_47fa_963c_3510da5b8070.slice/crio-conmon-ecdd8bfdea62b0d96388096ed93cd7e58b31fb36e6a7f144e4c8907825dce04f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad5aee2_ef50_47fa_963c_3510da5b8070.slice/crio-ecdd8bfdea62b0d96388096ed93cd7e58b31fb36e6a7f144e4c8907825dce04f.scope\": RecentStats: unable to find data in memory cache]" Dec 03 00:20:51 crc kubenswrapper[4811]: I1203 00:20:51.496513 4811 generic.go:334] "Generic (PLEG): container finished" podID="8ad5aee2-ef50-47fa-963c-3510da5b8070" containerID="ecdd8bfdea62b0d96388096ed93cd7e58b31fb36e6a7f144e4c8907825dce04f" exitCode=0 Dec 03 00:20:51 crc kubenswrapper[4811]: I1203 00:20:51.496584 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"8ad5aee2-ef50-47fa-963c-3510da5b8070","Type":"ContainerDied","Data":"ecdd8bfdea62b0d96388096ed93cd7e58b31fb36e6a7f144e4c8907825dce04f"} Dec 03 00:20:51 crc kubenswrapper[4811]: I1203 00:20:51.496666 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"8ad5aee2-ef50-47fa-963c-3510da5b8070","Type":"ContainerStarted","Data":"2cd1e1d99141e38884ebedb9e87a50a103c189fc75d1086660a1e4b19ad9c97d"} Dec 03 00:20:52 crc kubenswrapper[4811]: I1203 00:20:52.508675 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"8ad5aee2-ef50-47fa-963c-3510da5b8070","Type":"ContainerStarted","Data":"0f40d5b7546f68e7ebfb0a3b25379d43b326ada8f882ba5dc3d69a3d71084f4c"} Dec 03 00:20:52 crc kubenswrapper[4811]: I1203 00:20:52.546618 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.546584685 podStartE2EDuration="3.546584685s" podCreationTimestamp="2025-12-03 00:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:20:52.536797122 +0000 UTC m=+892.678626594" watchObservedRunningTime="2025-12-03 00:20:52.546584685 +0000 UTC m=+892.688414197" Dec 03 00:20:59 crc kubenswrapper[4811]: I1203 00:20:59.614369 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x4vqf"] Dec 03 00:20:59 crc kubenswrapper[4811]: I1203 00:20:59.616601 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:20:59 crc kubenswrapper[4811]: I1203 00:20:59.640187 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4vqf"] Dec 03 00:20:59 crc kubenswrapper[4811]: I1203 00:20:59.803199 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-utilities\") pod \"community-operators-x4vqf\" (UID: \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\") " pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:20:59 crc kubenswrapper[4811]: I1203 00:20:59.803277 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4hpf\" (UniqueName: \"kubernetes.io/projected/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-kube-api-access-m4hpf\") pod \"community-operators-x4vqf\" (UID: \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\") " pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:20:59 crc kubenswrapper[4811]: I1203 00:20:59.803552 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-catalog-content\") pod \"community-operators-x4vqf\" (UID: \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\") " pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:20:59 crc kubenswrapper[4811]: I1203 00:20:59.904788 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-catalog-content\") pod \"community-operators-x4vqf\" (UID: \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\") " pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:20:59 crc kubenswrapper[4811]: I1203 00:20:59.904903 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-utilities\") pod \"community-operators-x4vqf\" (UID: \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\") " pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:20:59 crc kubenswrapper[4811]: I1203 00:20:59.904940 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4hpf\" (UniqueName: \"kubernetes.io/projected/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-kube-api-access-m4hpf\") pod \"community-operators-x4vqf\" (UID: \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\") " pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:20:59 crc kubenswrapper[4811]: I1203 00:20:59.905558 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-catalog-content\") pod \"community-operators-x4vqf\" (UID: \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\") " pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:20:59 crc kubenswrapper[4811]: I1203 00:20:59.905640 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-utilities\") pod \"community-operators-x4vqf\" (UID: \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\") " pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:20:59 crc kubenswrapper[4811]: I1203 00:20:59.941559 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m4hpf\" (UniqueName: \"kubernetes.io/projected/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-kube-api-access-m4hpf\") pod \"community-operators-x4vqf\" (UID: \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\") " pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:21:00 crc kubenswrapper[4811]: I1203 00:21:00.239059 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:21:00 crc kubenswrapper[4811]: I1203 00:21:00.394782 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 03 00:21:00 crc kubenswrapper[4811]: I1203 00:21:00.395096 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="8ad5aee2-ef50-47fa-963c-3510da5b8070" containerName="docker-build" containerID="cri-o://0f40d5b7546f68e7ebfb0a3b25379d43b326ada8f882ba5dc3d69a3d71084f4c" gracePeriod=30 Dec 03 00:21:00 crc kubenswrapper[4811]: I1203 00:21:00.454840 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4vqf"] Dec 03 00:21:00 crc kubenswrapper[4811]: I1203 00:21:00.579041 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4vqf" event={"ID":"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7","Type":"ContainerStarted","Data":"b3ca7e48f3789f71e33b884fe009ea7a09417231f7a0e53fc74d29d5bc37d1e4"} Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.169143 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.171202 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.173581 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.173600 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.177349 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.192145 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.340321 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ece903b-ffa0-4dcd-8116-38e482b406e6-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.340373 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/4ece903b-ffa0-4dcd-8116-38e482b406e6-builder-dockercfg-xrr5t-push\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.340414 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.340448 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.340468 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/4ece903b-ffa0-4dcd-8116-38e482b406e6-builder-dockercfg-xrr5t-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.340488 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.340509 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.340530 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.340559 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ece903b-ffa0-4dcd-8116-38e482b406e6-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.340579 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.340842 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.340966 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzwxv\" (UniqueName: \"kubernetes.io/projected/4ece903b-ffa0-4dcd-8116-38e482b406e6-kube-api-access-nzwxv\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.442366 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.443179 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.443050 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzwxv\" (UniqueName: \"kubernetes.io/projected/4ece903b-ffa0-4dcd-8116-38e482b406e6-kube-api-access-nzwxv\") pod 
\"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.444847 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/4ece903b-ffa0-4dcd-8116-38e482b406e6-builder-dockercfg-xrr5t-push\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.445019 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ece903b-ffa0-4dcd-8116-38e482b406e6-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.445207 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.445488 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.445594 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/4ece903b-ffa0-4dcd-8116-38e482b406e6-builder-dockercfg-xrr5t-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.445695 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.445778 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.445880 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.445998 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ece903b-ffa0-4dcd-8116-38e482b406e6-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.446103 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.446302 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.446500 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ece903b-ffa0-4dcd-8116-38e482b406e6-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.446985 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.447089 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ece903b-ffa0-4dcd-8116-38e482b406e6-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.447671 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.447750 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.447915 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc 
kubenswrapper[4811]: I1203 00:21:02.448446 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.453889 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/4ece903b-ffa0-4dcd-8116-38e482b406e6-builder-dockercfg-xrr5t-push\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.454901 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/4ece903b-ffa0-4dcd-8116-38e482b406e6-builder-dockercfg-xrr5t-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.467700 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzwxv\" (UniqueName: \"kubernetes.io/projected/4ece903b-ffa0-4dcd-8116-38e482b406e6-kube-api-access-nzwxv\") pod \"smart-gateway-operator-2-build\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.487766 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:21:02 crc kubenswrapper[4811]: I1203 00:21:02.742237 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Dec 03 00:21:03 crc kubenswrapper[4811]: I1203 00:21:03.604409 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4ece903b-ffa0-4dcd-8116-38e482b406e6","Type":"ContainerStarted","Data":"2b0ef26f6be97652386059e41abeb9f57d4dc3560ab401f8810f5a02effd8b01"} Dec 03 00:21:04 crc kubenswrapper[4811]: I1203 00:21:04.615443 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4vqf" event={"ID":"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7","Type":"ContainerStarted","Data":"63448658b2b38d5d30f13ae02f8f0e6f5045c9e2a002f273d54743b3636e23f6"} Dec 03 00:21:05 crc kubenswrapper[4811]: I1203 00:21:05.733744 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_8ad5aee2-ef50-47fa-963c-3510da5b8070/docker-build/0.log" Dec 03 00:21:05 crc kubenswrapper[4811]: I1203 00:21:05.734998 4811 generic.go:334] "Generic (PLEG): container finished" podID="8ad5aee2-ef50-47fa-963c-3510da5b8070" containerID="0f40d5b7546f68e7ebfb0a3b25379d43b326ada8f882ba5dc3d69a3d71084f4c" exitCode=1 Dec 03 00:21:05 crc kubenswrapper[4811]: I1203 00:21:05.735054 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"8ad5aee2-ef50-47fa-963c-3510da5b8070","Type":"ContainerDied","Data":"0f40d5b7546f68e7ebfb0a3b25379d43b326ada8f882ba5dc3d69a3d71084f4c"} Dec 03 00:21:05 crc kubenswrapper[4811]: I1203 00:21:05.984061 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_8ad5aee2-ef50-47fa-963c-3510da5b8070/docker-build/0.log" Dec 03 00:21:05 crc kubenswrapper[4811]: I1203 00:21:05.984918 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.102472 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-container-storage-run\") pod \"8ad5aee2-ef50-47fa-963c-3510da5b8070\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.102529 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pclk4\" (UniqueName: \"kubernetes.io/projected/8ad5aee2-ef50-47fa-963c-3510da5b8070-kube-api-access-pclk4\") pod \"8ad5aee2-ef50-47fa-963c-3510da5b8070\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.102559 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/8ad5aee2-ef50-47fa-963c-3510da5b8070-builder-dockercfg-xrr5t-pull\") pod \"8ad5aee2-ef50-47fa-963c-3510da5b8070\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.102627 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-ca-bundles\") pod \"8ad5aee2-ef50-47fa-963c-3510da5b8070\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.102696 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-buildworkdir\") pod \"8ad5aee2-ef50-47fa-963c-3510da5b8070\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.102720 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-container-storage-root\") pod \"8ad5aee2-ef50-47fa-963c-3510da5b8070\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.103648 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "8ad5aee2-ef50-47fa-963c-3510da5b8070" (UID: "8ad5aee2-ef50-47fa-963c-3510da5b8070"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.103733 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "8ad5aee2-ef50-47fa-963c-3510da5b8070" (UID: "8ad5aee2-ef50-47fa-963c-3510da5b8070"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.103761 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "8ad5aee2-ef50-47fa-963c-3510da5b8070" (UID: "8ad5aee2-ef50-47fa-963c-3510da5b8070"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.103803 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8ad5aee2-ef50-47fa-963c-3510da5b8070-buildcachedir\") pod \"8ad5aee2-ef50-47fa-963c-3510da5b8070\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.103829 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-blob-cache\") pod \"8ad5aee2-ef50-47fa-963c-3510da5b8070\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.103863 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ad5aee2-ef50-47fa-963c-3510da5b8070-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "8ad5aee2-ef50-47fa-963c-3510da5b8070" (UID: "8ad5aee2-ef50-47fa-963c-3510da5b8070"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.103895 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ad5aee2-ef50-47fa-963c-3510da5b8070-node-pullsecrets\") pod \"8ad5aee2-ef50-47fa-963c-3510da5b8070\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.104039 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ad5aee2-ef50-47fa-963c-3510da5b8070-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8ad5aee2-ef50-47fa-963c-3510da5b8070" (UID: "8ad5aee2-ef50-47fa-963c-3510da5b8070"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.113407 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ad5aee2-ef50-47fa-963c-3510da5b8070-builder-dockercfg-xrr5t-pull" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-pull") pod "8ad5aee2-ef50-47fa-963c-3510da5b8070" (UID: "8ad5aee2-ef50-47fa-963c-3510da5b8070"). InnerVolumeSpecName "builder-dockercfg-xrr5t-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.113420 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad5aee2-ef50-47fa-963c-3510da5b8070-kube-api-access-pclk4" (OuterVolumeSpecName: "kube-api-access-pclk4") pod "8ad5aee2-ef50-47fa-963c-3510da5b8070" (UID: "8ad5aee2-ef50-47fa-963c-3510da5b8070"). InnerVolumeSpecName "kube-api-access-pclk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.117470 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-proxy-ca-bundles\") pod \"8ad5aee2-ef50-47fa-963c-3510da5b8070\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.117574 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-system-configs\") pod \"8ad5aee2-ef50-47fa-963c-3510da5b8070\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.117606 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/8ad5aee2-ef50-47fa-963c-3510da5b8070-builder-dockercfg-xrr5t-push\") pod \"8ad5aee2-ef50-47fa-963c-3510da5b8070\" (UID: \"8ad5aee2-ef50-47fa-963c-3510da5b8070\") " Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.117920 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "8ad5aee2-ef50-47fa-963c-3510da5b8070" (UID: "8ad5aee2-ef50-47fa-963c-3510da5b8070"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.118121 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.118137 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pclk4\" (UniqueName: \"kubernetes.io/projected/8ad5aee2-ef50-47fa-963c-3510da5b8070-kube-api-access-pclk4\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.118152 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/8ad5aee2-ef50-47fa-963c-3510da5b8070-builder-dockercfg-xrr5t-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.118166 4811 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.118178 4811 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.118189 4811 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8ad5aee2-ef50-47fa-963c-3510da5b8070-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.118200 4811 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ad5aee2-ef50-47fa-963c-3510da5b8070-node-pullsecrets\") on node \"crc\" 
DevicePath \"\"" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.118210 4811 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.118633 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "8ad5aee2-ef50-47fa-963c-3510da5b8070" (UID: "8ad5aee2-ef50-47fa-963c-3510da5b8070"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.122751 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ad5aee2-ef50-47fa-963c-3510da5b8070-builder-dockercfg-xrr5t-push" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-push") pod "8ad5aee2-ef50-47fa-963c-3510da5b8070" (UID: "8ad5aee2-ef50-47fa-963c-3510da5b8070"). InnerVolumeSpecName "builder-dockercfg-xrr5t-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.220293 4811 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.221227 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/8ad5aee2-ef50-47fa-963c-3510da5b8070-builder-dockercfg-xrr5t-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.626740 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "8ad5aee2-ef50-47fa-963c-3510da5b8070" (UID: "8ad5aee2-ef50-47fa-963c-3510da5b8070"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.629065 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.744208 4811 generic.go:334] "Generic (PLEG): container finished" podID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" containerID="63448658b2b38d5d30f13ae02f8f0e6f5045c9e2a002f273d54743b3636e23f6" exitCode=0 Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.744328 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4vqf" event={"ID":"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7","Type":"ContainerDied","Data":"63448658b2b38d5d30f13ae02f8f0e6f5045c9e2a002f273d54743b3636e23f6"} Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.746923 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_8ad5aee2-ef50-47fa-963c-3510da5b8070/docker-build/0.log" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.747714 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.747751 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"8ad5aee2-ef50-47fa-963c-3510da5b8070","Type":"ContainerDied","Data":"2cd1e1d99141e38884ebedb9e87a50a103c189fc75d1086660a1e4b19ad9c97d"} Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.747800 4811 scope.go:117] "RemoveContainer" containerID="0f40d5b7546f68e7ebfb0a3b25379d43b326ada8f882ba5dc3d69a3d71084f4c" Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.753330 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4ece903b-ffa0-4dcd-8116-38e482b406e6","Type":"ContainerStarted","Data":"aa798e970121815cbc85c20b487ef572e12b828b5cada6f64d6f0dc80c243c92"} Dec 03 00:21:06 crc kubenswrapper[4811]: I1203 00:21:06.829639 4811 scope.go:117] "RemoveContainer" containerID="ecdd8bfdea62b0d96388096ed93cd7e58b31fb36e6a7f144e4c8907825dce04f" Dec 03 00:21:07 crc kubenswrapper[4811]: I1203 00:21:07.642571 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8ad5aee2-ef50-47fa-963c-3510da5b8070" (UID: "8ad5aee2-ef50-47fa-963c-3510da5b8070"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:21:07 crc kubenswrapper[4811]: I1203 00:21:07.648162 4811 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8ad5aee2-ef50-47fa-963c-3510da5b8070-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:07 crc kubenswrapper[4811]: I1203 00:21:07.692086 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 03 00:21:07 crc kubenswrapper[4811]: I1203 00:21:07.698382 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 03 00:21:08 crc kubenswrapper[4811]: I1203 00:21:08.137130 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ad5aee2-ef50-47fa-963c-3510da5b8070" path="/var/lib/kubelet/pods/8ad5aee2-ef50-47fa-963c-3510da5b8070/volumes" Dec 03 00:21:08 crc kubenswrapper[4811]: I1203 00:21:08.781574 4811 generic.go:334] "Generic (PLEG): container finished" podID="4ece903b-ffa0-4dcd-8116-38e482b406e6" containerID="aa798e970121815cbc85c20b487ef572e12b828b5cada6f64d6f0dc80c243c92" exitCode=0 Dec 03 00:21:08 crc kubenswrapper[4811]: I1203 00:21:08.781698 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4ece903b-ffa0-4dcd-8116-38e482b406e6","Type":"ContainerDied","Data":"aa798e970121815cbc85c20b487ef572e12b828b5cada6f64d6f0dc80c243c92"} Dec 03 00:21:08 crc kubenswrapper[4811]: I1203 00:21:08.787112 4811 generic.go:334] "Generic (PLEG): container finished" podID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" containerID="faed1019f7c7c8b81aa1d48a815cb34f3c2020634a433f0c24d92f750a5eb697" exitCode=0 Dec 03 00:21:08 crc kubenswrapper[4811]: I1203 00:21:08.787169 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4vqf" 
event={"ID":"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7","Type":"ContainerDied","Data":"faed1019f7c7c8b81aa1d48a815cb34f3c2020634a433f0c24d92f750a5eb697"} Dec 03 00:21:09 crc kubenswrapper[4811]: I1203 00:21:09.808975 4811 generic.go:334] "Generic (PLEG): container finished" podID="4ece903b-ffa0-4dcd-8116-38e482b406e6" containerID="881f9c6e664050bfbb5744686a2a97b4322a8777c6d52820083d4f8d426a159e" exitCode=0 Dec 03 00:21:09 crc kubenswrapper[4811]: I1203 00:21:09.809036 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4ece903b-ffa0-4dcd-8116-38e482b406e6","Type":"ContainerDied","Data":"881f9c6e664050bfbb5744686a2a97b4322a8777c6d52820083d4f8d426a159e"} Dec 03 00:21:09 crc kubenswrapper[4811]: I1203 00:21:09.816965 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4vqf" event={"ID":"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7","Type":"ContainerStarted","Data":"119a2dde28f540b924c782b375f546152653f4946183ef8bd88f8018368ad4a7"} Dec 03 00:21:09 crc kubenswrapper[4811]: I1203 00:21:09.849943 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_4ece903b-ffa0-4dcd-8116-38e482b406e6/manage-dockerfile/0.log" Dec 03 00:21:09 crc kubenswrapper[4811]: I1203 00:21:09.879412 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x4vqf" podStartSLOduration=8.382492974 podStartE2EDuration="10.879392453s" podCreationTimestamp="2025-12-03 00:20:59 +0000 UTC" firstStartedPulling="2025-12-03 00:21:06.74909948 +0000 UTC m=+906.890928952" lastFinishedPulling="2025-12-03 00:21:09.245998959 +0000 UTC m=+909.387828431" observedRunningTime="2025-12-03 00:21:09.877020803 +0000 UTC m=+910.018850275" watchObservedRunningTime="2025-12-03 00:21:09.879392453 +0000 UTC m=+910.021221915" Dec 03 00:21:10 crc kubenswrapper[4811]: I1203 00:21:10.241372 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:21:10 crc kubenswrapper[4811]: I1203 00:21:10.241782 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:21:10 crc kubenswrapper[4811]: I1203 00:21:10.830176 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4ece903b-ffa0-4dcd-8116-38e482b406e6","Type":"ContainerStarted","Data":"c2fef2345025e126214fb514d860207168c276c65f8ef28081dfde543c19fe57"} Dec 03 00:21:10 crc kubenswrapper[4811]: I1203 00:21:10.860023 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=8.859997951 podStartE2EDuration="8.859997951s" podCreationTimestamp="2025-12-03 00:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:21:10.856726224 +0000 UTC m=+910.998555686" watchObservedRunningTime="2025-12-03 00:21:10.859997951 +0000 UTC m=+911.001827413" Dec 03 00:21:11 crc kubenswrapper[4811]: I1203 00:21:11.282667 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-x4vqf" podUID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" containerName="registry-server" probeResult="failure" output=< Dec 03 00:21:11 crc kubenswrapper[4811]: timeout: failed to connect 
service ":50051" within 1s Dec 03 00:21:11 crc kubenswrapper[4811]: > Dec 03 00:21:20 crc kubenswrapper[4811]: I1203 00:21:20.290828 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:21:20 crc kubenswrapper[4811]: I1203 00:21:20.345993 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:21:20 crc kubenswrapper[4811]: I1203 00:21:20.540881 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x4vqf"] Dec 03 00:21:21 crc kubenswrapper[4811]: I1203 00:21:21.912954 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x4vqf" podUID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" containerName="registry-server" containerID="cri-o://119a2dde28f540b924c782b375f546152653f4946183ef8bd88f8018368ad4a7" gracePeriod=2 Dec 03 00:21:27 crc kubenswrapper[4811]: I1203 00:21:27.961728 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x4vqf_e00ff47f-507a-4e6b-81cf-412e7eb4c1c7/registry-server/0.log" Dec 03 00:21:27 crc kubenswrapper[4811]: I1203 00:21:27.963538 4811 generic.go:334] "Generic (PLEG): container finished" podID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" containerID="119a2dde28f540b924c782b375f546152653f4946183ef8bd88f8018368ad4a7" exitCode=137 Dec 03 00:21:27 crc kubenswrapper[4811]: I1203 00:21:27.963593 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4vqf" event={"ID":"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7","Type":"ContainerDied","Data":"119a2dde28f540b924c782b375f546152653f4946183ef8bd88f8018368ad4a7"} Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.203121 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x4vqf_e00ff47f-507a-4e6b-81cf-412e7eb4c1c7/registry-server/0.log" Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.204291 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.346228 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4hpf\" (UniqueName: \"kubernetes.io/projected/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-kube-api-access-m4hpf\") pod \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\" (UID: \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\") " Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.346310 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-utilities\") pod \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\" (UID: \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\") " Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.346424 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-catalog-content\") pod \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\" (UID: \"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7\") " Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.347525 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-utilities" (OuterVolumeSpecName: "utilities") pod "e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" (UID: "e00ff47f-507a-4e6b-81cf-412e7eb4c1c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.353434 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-kube-api-access-m4hpf" (OuterVolumeSpecName: "kube-api-access-m4hpf") pod "e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" (UID: "e00ff47f-507a-4e6b-81cf-412e7eb4c1c7"). InnerVolumeSpecName "kube-api-access-m4hpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.414726 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" (UID: "e00ff47f-507a-4e6b-81cf-412e7eb4c1c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.448237 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.448283 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4hpf\" (UniqueName: \"kubernetes.io/projected/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-kube-api-access-m4hpf\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.448296 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.972157 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x4vqf_e00ff47f-507a-4e6b-81cf-412e7eb4c1c7/registry-server/0.log" Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.972969 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4vqf" event={"ID":"e00ff47f-507a-4e6b-81cf-412e7eb4c1c7","Type":"ContainerDied","Data":"b3ca7e48f3789f71e33b884fe009ea7a09417231f7a0e53fc74d29d5bc37d1e4"} Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.973034 4811 scope.go:117] "RemoveContainer" containerID="119a2dde28f540b924c782b375f546152653f4946183ef8bd88f8018368ad4a7" Dec 03 00:21:28 crc kubenswrapper[4811]: I1203 00:21:28.973068 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4vqf" Dec 03 00:21:29 crc kubenswrapper[4811]: I1203 00:21:29.014027 4811 scope.go:117] "RemoveContainer" containerID="faed1019f7c7c8b81aa1d48a815cb34f3c2020634a433f0c24d92f750a5eb697" Dec 03 00:21:29 crc kubenswrapper[4811]: I1203 00:21:29.017385 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x4vqf"] Dec 03 00:21:29 crc kubenswrapper[4811]: I1203 00:21:29.025583 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x4vqf"] Dec 03 00:21:29 crc kubenswrapper[4811]: I1203 00:21:29.044433 4811 scope.go:117] "RemoveContainer" containerID="63448658b2b38d5d30f13ae02f8f0e6f5045c9e2a002f273d54743b3636e23f6" Dec 03 00:21:30 crc kubenswrapper[4811]: I1203 00:21:30.123562 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" path="/var/lib/kubelet/pods/e00ff47f-507a-4e6b-81cf-412e7eb4c1c7/volumes" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.436814 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7gj28"] Dec 03 00:21:39 crc kubenswrapper[4811]: E1203 00:21:39.437693 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad5aee2-ef50-47fa-963c-3510da5b8070" containerName="manage-dockerfile" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.437711 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad5aee2-ef50-47fa-963c-3510da5b8070" containerName="manage-dockerfile" Dec 03 00:21:39 crc kubenswrapper[4811]: E1203 00:21:39.437723 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" containerName="extract-utilities" Dec 03 00:21:39 crc 
kubenswrapper[4811]: I1203 00:21:39.437731 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" containerName="extract-utilities" Dec 03 00:21:39 crc kubenswrapper[4811]: E1203 00:21:39.437749 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad5aee2-ef50-47fa-963c-3510da5b8070" containerName="docker-build" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.437758 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad5aee2-ef50-47fa-963c-3510da5b8070" containerName="docker-build" Dec 03 00:21:39 crc kubenswrapper[4811]: E1203 00:21:39.437777 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" containerName="extract-content" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.437786 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" containerName="extract-content" Dec 03 00:21:39 crc kubenswrapper[4811]: E1203 00:21:39.437799 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" containerName="registry-server" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.437806 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" containerName="registry-server" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.437947 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ad5aee2-ef50-47fa-963c-3510da5b8070" containerName="docker-build" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.437961 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00ff47f-507a-4e6b-81cf-412e7eb4c1c7" containerName="registry-server" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.439148 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.443811 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gj28"] Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.507077 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhx4d\" (UniqueName: \"kubernetes.io/projected/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-kube-api-access-lhx4d\") pod \"certified-operators-7gj28\" (UID: \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\") " pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.507191 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-utilities\") pod \"certified-operators-7gj28\" (UID: \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\") " pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.507308 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-catalog-content\") pod \"certified-operators-7gj28\" (UID: \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\") " pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.609285 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhx4d\" (UniqueName: \"kubernetes.io/projected/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-kube-api-access-lhx4d\") pod \"certified-operators-7gj28\" (UID: \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\") " pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.609371 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-utilities\") pod \"certified-operators-7gj28\" (UID: \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\") " pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.609422 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-catalog-content\") pod \"certified-operators-7gj28\" (UID: \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\") " pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.610057 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-catalog-content\") pod \"certified-operators-7gj28\" (UID: \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\") " pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.610322 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-utilities\") pod \"certified-operators-7gj28\" (UID: \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\") " pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.636884 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lhx4d\" (UniqueName: \"kubernetes.io/projected/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-kube-api-access-lhx4d\") pod \"certified-operators-7gj28\" (UID: \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\") " pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:39 crc kubenswrapper[4811]: I1203 00:21:39.757179 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:40 crc kubenswrapper[4811]: I1203 00:21:40.035307 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gj28"] Dec 03 00:21:41 crc kubenswrapper[4811]: I1203 00:21:41.069793 4811 generic.go:334] "Generic (PLEG): container finished" podID="f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" containerID="71b9803013faf61946295b5d927757fca315d5285586ea349ca9bf49a5be3b2c" exitCode=0 Dec 03 00:21:41 crc kubenswrapper[4811]: I1203 00:21:41.069897 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gj28" event={"ID":"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c","Type":"ContainerDied","Data":"71b9803013faf61946295b5d927757fca315d5285586ea349ca9bf49a5be3b2c"} Dec 03 00:21:41 crc kubenswrapper[4811]: I1203 00:21:41.070382 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gj28" event={"ID":"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c","Type":"ContainerStarted","Data":"087d5ec45314c7770a951da56b33b1ecc238bb1be5bca90724fe1f9d9c8c2cd2"} Dec 03 00:21:43 crc kubenswrapper[4811]: I1203 00:21:43.088805 4811 generic.go:334] "Generic (PLEG): container finished" podID="f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" containerID="c876573b5634ce2f29f75c9529de7d80ddde4dab0fd97f3d95719d8cf5dcf8f1" exitCode=0 Dec 03 00:21:43 crc kubenswrapper[4811]: I1203 00:21:43.088879 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gj28" event={"ID":"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c","Type":"ContainerDied","Data":"c876573b5634ce2f29f75c9529de7d80ddde4dab0fd97f3d95719d8cf5dcf8f1"} Dec 03 00:21:47 crc kubenswrapper[4811]: I1203 00:21:47.123786 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gj28" event={"ID":"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c","Type":"ContainerStarted","Data":"0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6"} Dec 03 00:21:47 crc kubenswrapper[4811]: I1203 00:21:47.145873 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7gj28" podStartSLOduration=3.032800657 podStartE2EDuration="8.14585142s" podCreationTimestamp="2025-12-03 00:21:39 +0000 UTC" firstStartedPulling="2025-12-03 00:21:41.072125062 +0000 UTC m=+941.213954554" lastFinishedPulling="2025-12-03 00:21:46.185175835 +0000 UTC m=+946.327005317" observedRunningTime="2025-12-03 00:21:47.143854889 +0000 UTC m=+947.285684371" watchObservedRunningTime="2025-12-03 00:21:47.14585142 +0000 UTC m=+947.287680912" Dec 03 00:21:49 crc kubenswrapper[4811]: I1203 00:21:49.758136 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:49 crc kubenswrapper[4811]: I1203 00:21:49.758828 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:49 crc kubenswrapper[4811]: I1203 00:21:49.829601 4811 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:59 crc kubenswrapper[4811]: I1203 00:21:59.798530 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:21:59 crc kubenswrapper[4811]: I1203 00:21:59.859805 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gj28"] Dec 03 00:22:00 crc kubenswrapper[4811]: I1203 00:22:00.232702 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7gj28" podUID="f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" containerName="registry-server" containerID="cri-o://0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6" gracePeriod=2 Dec 03 00:22:01 crc kubenswrapper[4811]: I1203 00:22:01.764966 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:22:01 crc kubenswrapper[4811]: I1203 00:22:01.790173 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-utilities\") pod \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\" (UID: \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\") " Dec 03 00:22:01 crc kubenswrapper[4811]: I1203 00:22:01.790473 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhx4d\" (UniqueName: \"kubernetes.io/projected/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-kube-api-access-lhx4d\") pod \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\" (UID: \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\") " Dec 03 00:22:01 crc kubenswrapper[4811]: I1203 00:22:01.790560 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-catalog-content\") pod \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\" (UID: \"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c\") " Dec 03 00:22:01 crc kubenswrapper[4811]: I1203 00:22:01.791798 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-utilities" (OuterVolumeSpecName: "utilities") pod "f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" (UID: "f71fb1d7-b39f-4c92-ad5d-61c778d7b70c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:22:01 crc kubenswrapper[4811]: I1203 00:22:01.801070 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-kube-api-access-lhx4d" (OuterVolumeSpecName: "kube-api-access-lhx4d") pod "f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" (UID: "f71fb1d7-b39f-4c92-ad5d-61c778d7b70c"). InnerVolumeSpecName "kube-api-access-lhx4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:22:01 crc kubenswrapper[4811]: I1203 00:22:01.847185 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" (UID: "f71fb1d7-b39f-4c92-ad5d-61c778d7b70c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:22:01 crc kubenswrapper[4811]: I1203 00:22:01.891971 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhx4d\" (UniqueName: \"kubernetes.io/projected/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-kube-api-access-lhx4d\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:01 crc kubenswrapper[4811]: I1203 00:22:01.892009 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:01 crc kubenswrapper[4811]: I1203 00:22:01.892018 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.246577 4811 generic.go:334] "Generic (PLEG): container finished" podID="f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" containerID="0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6" exitCode=0 Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.246683 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gj28" event={"ID":"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c","Type":"ContainerDied","Data":"0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6"} Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.247064 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gj28" event={"ID":"f71fb1d7-b39f-4c92-ad5d-61c778d7b70c","Type":"ContainerDied","Data":"087d5ec45314c7770a951da56b33b1ecc238bb1be5bca90724fe1f9d9c8c2cd2"} Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.247102 4811 scope.go:117] "RemoveContainer" containerID="0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6" Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.246765 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gj28" Dec 03 00:22:02 crc kubenswrapper[4811]: E1203 00:22:02.262756 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf71fb1d7_b39f_4c92_ad5d_61c778d7b70c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf71fb1d7_b39f_4c92_ad5d_61c778d7b70c.slice/crio-087d5ec45314c7770a951da56b33b1ecc238bb1be5bca90724fe1f9d9c8c2cd2\": RecentStats: unable to find data in memory cache]" Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.272370 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gj28"] Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.276625 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7gj28"] Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.277745 4811 scope.go:117] "RemoveContainer" containerID="c876573b5634ce2f29f75c9529de7d80ddde4dab0fd97f3d95719d8cf5dcf8f1" Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.294822 4811 scope.go:117] "RemoveContainer" containerID="71b9803013faf61946295b5d927757fca315d5285586ea349ca9bf49a5be3b2c" Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.331521 4811 scope.go:117] "RemoveContainer" containerID="0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6" Dec 03 00:22:02 crc kubenswrapper[4811]: E1203 00:22:02.333234 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6\": container with ID starting with 0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6 not found: ID does not exist" containerID="0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6" Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.333329 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6"} err="failed to get container status \"0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6\": rpc error: code = NotFound desc = could not find container \"0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6\": container with ID starting with 0a53b5ea645fa236fc40f14d3bc1be7b9e574f05b817ba5600eca38b2f3e56f6 not found: ID does not exist" Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.333377 4811 scope.go:117] "RemoveContainer" containerID="c876573b5634ce2f29f75c9529de7d80ddde4dab0fd97f3d95719d8cf5dcf8f1" Dec 03 00:22:02 crc kubenswrapper[4811]: E1203 00:22:02.333895 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c876573b5634ce2f29f75c9529de7d80ddde4dab0fd97f3d95719d8cf5dcf8f1\": container with ID starting with c876573b5634ce2f29f75c9529de7d80ddde4dab0fd97f3d95719d8cf5dcf8f1 not found: ID does not exist" containerID="c876573b5634ce2f29f75c9529de7d80ddde4dab0fd97f3d95719d8cf5dcf8f1" Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.333936 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c876573b5634ce2f29f75c9529de7d80ddde4dab0fd97f3d95719d8cf5dcf8f1"} err="failed to get container status 
\"c876573b5634ce2f29f75c9529de7d80ddde4dab0fd97f3d95719d8cf5dcf8f1\": rpc error: code = NotFound desc = could not find container \"c876573b5634ce2f29f75c9529de7d80ddde4dab0fd97f3d95719d8cf5dcf8f1\": container with ID starting with c876573b5634ce2f29f75c9529de7d80ddde4dab0fd97f3d95719d8cf5dcf8f1 not found: ID does not exist" Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.333969 4811 scope.go:117] "RemoveContainer" containerID="71b9803013faf61946295b5d927757fca315d5285586ea349ca9bf49a5be3b2c" Dec 03 00:22:02 crc kubenswrapper[4811]: E1203 00:22:02.334422 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71b9803013faf61946295b5d927757fca315d5285586ea349ca9bf49a5be3b2c\": container with ID starting with 71b9803013faf61946295b5d927757fca315d5285586ea349ca9bf49a5be3b2c not found: ID does not exist" containerID="71b9803013faf61946295b5d927757fca315d5285586ea349ca9bf49a5be3b2c" Dec 03 00:22:02 crc kubenswrapper[4811]: I1203 00:22:02.334441 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b9803013faf61946295b5d927757fca315d5285586ea349ca9bf49a5be3b2c"} err="failed to get container status \"71b9803013faf61946295b5d927757fca315d5285586ea349ca9bf49a5be3b2c\": rpc error: code = NotFound desc = could not find container \"71b9803013faf61946295b5d927757fca315d5285586ea349ca9bf49a5be3b2c\": container with ID starting with 71b9803013faf61946295b5d927757fca315d5285586ea349ca9bf49a5be3b2c not found: ID does not exist" Dec 03 00:22:04 crc kubenswrapper[4811]: I1203 00:22:04.123145 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" path="/var/lib/kubelet/pods/f71fb1d7-b39f-4c92-ad5d-61c778d7b70c/volumes" Dec 03 00:22:07 crc kubenswrapper[4811]: I1203 00:22:07.833031 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bbcnv"] Dec 03 00:22:07 crc kubenswrapper[4811]: E1203 00:22:07.833806 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" containerName="extract-content" Dec 03 00:22:07 crc kubenswrapper[4811]: I1203 00:22:07.833825 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" containerName="extract-content" Dec 03 00:22:07 crc kubenswrapper[4811]: E1203 00:22:07.833846 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" containerName="extract-utilities" Dec 03 00:22:07 crc kubenswrapper[4811]: I1203 00:22:07.833854 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" containerName="extract-utilities" Dec 03 00:22:07 crc kubenswrapper[4811]: E1203 00:22:07.833871 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" containerName="registry-server" Dec 03 00:22:07 crc kubenswrapper[4811]: I1203 00:22:07.833879 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" containerName="registry-server" Dec 03 00:22:07 crc kubenswrapper[4811]: I1203 00:22:07.834022 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71fb1d7-b39f-4c92-ad5d-61c778d7b70c" containerName="registry-server" Dec 03 00:22:07 crc kubenswrapper[4811]: I1203 00:22:07.835072 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:07 crc kubenswrapper[4811]: I1203 00:22:07.853967 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bbcnv"] Dec 03 00:22:08 crc kubenswrapper[4811]: I1203 00:22:08.001571 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90380e99-834f-4df3-a099-77ff2dfb5f61-utilities\") pod \"redhat-operators-bbcnv\" (UID: \"90380e99-834f-4df3-a099-77ff2dfb5f61\") " pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:08 crc kubenswrapper[4811]: I1203 00:22:08.001736 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90380e99-834f-4df3-a099-77ff2dfb5f61-catalog-content\") pod \"redhat-operators-bbcnv\" (UID: \"90380e99-834f-4df3-a099-77ff2dfb5f61\") " pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:08 crc kubenswrapper[4811]: I1203 00:22:08.001897 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9pps\" (UniqueName: \"kubernetes.io/projected/90380e99-834f-4df3-a099-77ff2dfb5f61-kube-api-access-q9pps\") pod \"redhat-operators-bbcnv\" (UID: \"90380e99-834f-4df3-a099-77ff2dfb5f61\") " pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:08 crc kubenswrapper[4811]: I1203 00:22:08.102995 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9pps\" (UniqueName: \"kubernetes.io/projected/90380e99-834f-4df3-a099-77ff2dfb5f61-kube-api-access-q9pps\") pod \"redhat-operators-bbcnv\" (UID: \"90380e99-834f-4df3-a099-77ff2dfb5f61\") " pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:08 crc kubenswrapper[4811]: I1203 00:22:08.103078 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90380e99-834f-4df3-a099-77ff2dfb5f61-utilities\") pod \"redhat-operators-bbcnv\" (UID: \"90380e99-834f-4df3-a099-77ff2dfb5f61\") " pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:08 crc kubenswrapper[4811]: I1203 00:22:08.103142 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90380e99-834f-4df3-a099-77ff2dfb5f61-catalog-content\") pod \"redhat-operators-bbcnv\" (UID: \"90380e99-834f-4df3-a099-77ff2dfb5f61\") " pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:08 crc kubenswrapper[4811]: I1203 00:22:08.103687 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90380e99-834f-4df3-a099-77ff2dfb5f61-utilities\") pod \"redhat-operators-bbcnv\" (UID: \"90380e99-834f-4df3-a099-77ff2dfb5f61\") " pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:08 crc kubenswrapper[4811]: I1203 00:22:08.103785 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90380e99-834f-4df3-a099-77ff2dfb5f61-catalog-content\") pod \"redhat-operators-bbcnv\" (UID: \"90380e99-834f-4df3-a099-77ff2dfb5f61\") " pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:08 crc kubenswrapper[4811]: I1203 00:22:08.125240 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q9pps\" (UniqueName: \"kubernetes.io/projected/90380e99-834f-4df3-a099-77ff2dfb5f61-kube-api-access-q9pps\") pod \"redhat-operators-bbcnv\" (UID: \"90380e99-834f-4df3-a099-77ff2dfb5f61\") " pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:08 crc kubenswrapper[4811]: I1203 00:22:08.154446 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:08 crc kubenswrapper[4811]: I1203 00:22:08.447943 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bbcnv"] Dec 03 00:22:09 crc kubenswrapper[4811]: I1203 00:22:09.338175 4811 generic.go:334] "Generic (PLEG): container finished" podID="90380e99-834f-4df3-a099-77ff2dfb5f61" containerID="78b04ffd7ecfc823fdab25374daa1a42e4977fd9ea4806c443430fb6266b3fec" exitCode=0 Dec 03 00:22:09 crc kubenswrapper[4811]: I1203 00:22:09.338246 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbcnv" event={"ID":"90380e99-834f-4df3-a099-77ff2dfb5f61","Type":"ContainerDied","Data":"78b04ffd7ecfc823fdab25374daa1a42e4977fd9ea4806c443430fb6266b3fec"} Dec 03 00:22:09 crc kubenswrapper[4811]: I1203 00:22:09.338532 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbcnv" event={"ID":"90380e99-834f-4df3-a099-77ff2dfb5f61","Type":"ContainerStarted","Data":"c99ab286068b79a7df5288ab003e21be2f38b23e53964830d3b62955e13fdf85"} Dec 03 00:22:10 crc kubenswrapper[4811]: I1203 00:22:10.345853 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbcnv" event={"ID":"90380e99-834f-4df3-a099-77ff2dfb5f61","Type":"ContainerStarted","Data":"298f30e8bb8adc440953dc7a5c8efaa836fee56927d02c8a5ef0f2cbb14bb805"} Dec 03 00:22:11 crc kubenswrapper[4811]: I1203 00:22:11.353624 4811 generic.go:334] "Generic (PLEG): container finished" podID="90380e99-834f-4df3-a099-77ff2dfb5f61" containerID="298f30e8bb8adc440953dc7a5c8efaa836fee56927d02c8a5ef0f2cbb14bb805" exitCode=0 Dec 03 00:22:11 crc kubenswrapper[4811]: I1203 00:22:11.353704 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbcnv" event={"ID":"90380e99-834f-4df3-a099-77ff2dfb5f61","Type":"ContainerDied","Data":"298f30e8bb8adc440953dc7a5c8efaa836fee56927d02c8a5ef0f2cbb14bb805"} Dec 03 00:22:11 crc kubenswrapper[4811]: I1203 00:22:11.355862 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 00:22:13 crc kubenswrapper[4811]: I1203 00:22:13.369193 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbcnv" event={"ID":"90380e99-834f-4df3-a099-77ff2dfb5f61","Type":"ContainerStarted","Data":"afe888a1d259baceca39b609c1654c4c704a2d7c876e617202df6cf3cf08b4ed"} Dec 03 00:22:13 crc kubenswrapper[4811]: I1203 00:22:13.388312 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bbcnv" podStartSLOduration=3.464650057 podStartE2EDuration="6.388288626s" podCreationTimestamp="2025-12-03 00:22:07 +0000 UTC" firstStartedPulling="2025-12-03 00:22:09.340436158 +0000 UTC m=+969.482265670" lastFinishedPulling="2025-12-03 00:22:12.264074757 +0000 UTC m=+972.405904239" observedRunningTime="2025-12-03 00:22:13.384502967 +0000 UTC m=+973.526332459" watchObservedRunningTime="2025-12-03 00:22:13.388288626 +0000 UTC m=+973.530118108" Dec 03 00:22:18 crc 
kubenswrapper[4811]: I1203 00:22:18.155739 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:18 crc kubenswrapper[4811]: I1203 00:22:18.156360 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:18 crc kubenswrapper[4811]: I1203 00:22:18.220883 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:18 crc kubenswrapper[4811]: I1203 00:22:18.485553 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:18 crc kubenswrapper[4811]: I1203 00:22:18.525416 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bbcnv"] Dec 03 00:22:20 crc kubenswrapper[4811]: I1203 00:22:20.412986 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bbcnv" podUID="90380e99-834f-4df3-a099-77ff2dfb5f61" containerName="registry-server" containerID="cri-o://afe888a1d259baceca39b609c1654c4c704a2d7c876e617202df6cf3cf08b4ed" gracePeriod=2 Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.452200 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bbcnv_90380e99-834f-4df3-a099-77ff2dfb5f61/registry-server/0.log" Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.453570 4811 generic.go:334] "Generic (PLEG): container finished" podID="90380e99-834f-4df3-a099-77ff2dfb5f61" containerID="afe888a1d259baceca39b609c1654c4c704a2d7c876e617202df6cf3cf08b4ed" exitCode=137 Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.453606 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbcnv" event={"ID":"90380e99-834f-4df3-a099-77ff2dfb5f61","Type":"ContainerDied","Data":"afe888a1d259baceca39b609c1654c4c704a2d7c876e617202df6cf3cf08b4ed"} Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.661228 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bbcnv_90380e99-834f-4df3-a099-77ff2dfb5f61/registry-server/0.log" Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.662567 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.679221 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90380e99-834f-4df3-a099-77ff2dfb5f61-utilities\") pod \"90380e99-834f-4df3-a099-77ff2dfb5f61\" (UID: \"90380e99-834f-4df3-a099-77ff2dfb5f61\") " Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.679320 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9pps\" (UniqueName: \"kubernetes.io/projected/90380e99-834f-4df3-a099-77ff2dfb5f61-kube-api-access-q9pps\") pod \"90380e99-834f-4df3-a099-77ff2dfb5f61\" (UID: \"90380e99-834f-4df3-a099-77ff2dfb5f61\") " Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.679433 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90380e99-834f-4df3-a099-77ff2dfb5f61-catalog-content\") pod \"90380e99-834f-4df3-a099-77ff2dfb5f61\" (UID: \"90380e99-834f-4df3-a099-77ff2dfb5f61\") " Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.680477 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90380e99-834f-4df3-a099-77ff2dfb5f61-utilities" (OuterVolumeSpecName: "utilities") pod "90380e99-834f-4df3-a099-77ff2dfb5f61" (UID: "90380e99-834f-4df3-a099-77ff2dfb5f61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.700456 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90380e99-834f-4df3-a099-77ff2dfb5f61-kube-api-access-q9pps" (OuterVolumeSpecName: "kube-api-access-q9pps") pod "90380e99-834f-4df3-a099-77ff2dfb5f61" (UID: "90380e99-834f-4df3-a099-77ff2dfb5f61"). InnerVolumeSpecName "kube-api-access-q9pps". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.782725 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90380e99-834f-4df3-a099-77ff2dfb5f61-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.782772 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9pps\" (UniqueName: \"kubernetes.io/projected/90380e99-834f-4df3-a099-77ff2dfb5f61-kube-api-access-q9pps\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.837477 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90380e99-834f-4df3-a099-77ff2dfb5f61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90380e99-834f-4df3-a099-77ff2dfb5f61" (UID: "90380e99-834f-4df3-a099-77ff2dfb5f61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:22:26 crc kubenswrapper[4811]: I1203 00:22:26.884463 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90380e99-834f-4df3-a099-77ff2dfb5f61-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:27 crc kubenswrapper[4811]: I1203 00:22:27.460902 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bbcnv_90380e99-834f-4df3-a099-77ff2dfb5f61/registry-server/0.log" Dec 03 00:22:27 crc kubenswrapper[4811]: I1203 00:22:27.461649 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbcnv" event={"ID":"90380e99-834f-4df3-a099-77ff2dfb5f61","Type":"ContainerDied","Data":"c99ab286068b79a7df5288ab003e21be2f38b23e53964830d3b62955e13fdf85"} Dec 03 00:22:27 crc kubenswrapper[4811]: I1203 00:22:27.461700 4811 scope.go:117] "RemoveContainer" containerID="afe888a1d259baceca39b609c1654c4c704a2d7c876e617202df6cf3cf08b4ed" Dec 03 00:22:27 crc kubenswrapper[4811]: I1203 00:22:27.461718 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bbcnv" Dec 03 00:22:27 crc kubenswrapper[4811]: I1203 00:22:27.489109 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bbcnv"] Dec 03 00:22:27 crc kubenswrapper[4811]: I1203 00:22:27.489478 4811 scope.go:117] "RemoveContainer" containerID="298f30e8bb8adc440953dc7a5c8efaa836fee56927d02c8a5ef0f2cbb14bb805" Dec 03 00:22:27 crc kubenswrapper[4811]: I1203 00:22:27.498492 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bbcnv"] Dec 03 00:22:27 crc kubenswrapper[4811]: I1203 00:22:27.526977 4811 scope.go:117] "RemoveContainer" containerID="78b04ffd7ecfc823fdab25374daa1a42e4977fd9ea4806c443430fb6266b3fec" Dec 03 00:22:28 crc kubenswrapper[4811]: I1203 00:22:28.124765 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90380e99-834f-4df3-a099-77ff2dfb5f61" path="/var/lib/kubelet/pods/90380e99-834f-4df3-a099-77ff2dfb5f61/volumes" Dec 03 00:22:32 crc kubenswrapper[4811]: I1203 00:22:32.940361 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:22:32 crc kubenswrapper[4811]: I1203 00:22:32.941203 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:22:50 crc kubenswrapper[4811]: I1203 00:22:50.666848 4811 generic.go:334] "Generic (PLEG): container finished" podID="4ece903b-ffa0-4dcd-8116-38e482b406e6" containerID="c2fef2345025e126214fb514d860207168c276c65f8ef28081dfde543c19fe57" exitCode=0 Dec 03 00:22:50 crc kubenswrapper[4811]: I1203 00:22:50.666901 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4ece903b-ffa0-4dcd-8116-38e482b406e6","Type":"ContainerDied","Data":"c2fef2345025e126214fb514d860207168c276c65f8ef28081dfde543c19fe57"} Dec 03 00:22:52 crc 
kubenswrapper[4811]: I1203 00:22:52.057595 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.231977 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-container-storage-run\") pod \"4ece903b-ffa0-4dcd-8116-38e482b406e6\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.232386 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-buildworkdir\") pod \"4ece903b-ffa0-4dcd-8116-38e482b406e6\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.232426 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-container-storage-root\") pod \"4ece903b-ffa0-4dcd-8116-38e482b406e6\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.232459 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/4ece903b-ffa0-4dcd-8116-38e482b406e6-builder-dockercfg-xrr5t-push\") pod \"4ece903b-ffa0-4dcd-8116-38e482b406e6\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.232490 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-system-configs\") pod \"4ece903b-ffa0-4dcd-8116-38e482b406e6\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.232516 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ece903b-ffa0-4dcd-8116-38e482b406e6-buildcachedir\") pod \"4ece903b-ffa0-4dcd-8116-38e482b406e6\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.232542 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-ca-bundles\") pod \"4ece903b-ffa0-4dcd-8116-38e482b406e6\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.232631 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ece903b-ffa0-4dcd-8116-38e482b406e6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "4ece903b-ffa0-4dcd-8116-38e482b406e6" (UID: "4ece903b-ffa0-4dcd-8116-38e482b406e6"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.232782 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ece903b-ffa0-4dcd-8116-38e482b406e6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4ece903b-ffa0-4dcd-8116-38e482b406e6" (UID: "4ece903b-ffa0-4dcd-8116-38e482b406e6"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.233052 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "4ece903b-ffa0-4dcd-8116-38e482b406e6" (UID: "4ece903b-ffa0-4dcd-8116-38e482b406e6"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.232575 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ece903b-ffa0-4dcd-8116-38e482b406e6-node-pullsecrets\") pod \"4ece903b-ffa0-4dcd-8116-38e482b406e6\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.233579 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "4ece903b-ffa0-4dcd-8116-38e482b406e6" (UID: "4ece903b-ffa0-4dcd-8116-38e482b406e6"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.233628 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "4ece903b-ffa0-4dcd-8116-38e482b406e6" (UID: "4ece903b-ffa0-4dcd-8116-38e482b406e6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.233691 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/4ece903b-ffa0-4dcd-8116-38e482b406e6-builder-dockercfg-xrr5t-pull\") pod \"4ece903b-ffa0-4dcd-8116-38e482b406e6\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.233869 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-proxy-ca-bundles\") pod \"4ece903b-ffa0-4dcd-8116-38e482b406e6\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.233942 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-blob-cache\") pod \"4ece903b-ffa0-4dcd-8116-38e482b406e6\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.234058 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwxv\" (UniqueName: \"kubernetes.io/projected/4ece903b-ffa0-4dcd-8116-38e482b406e6-kube-api-access-nzwxv\") pod \"4ece903b-ffa0-4dcd-8116-38e482b406e6\" (UID: \"4ece903b-ffa0-4dcd-8116-38e482b406e6\") " Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.234568 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod 
"4ece903b-ffa0-4dcd-8116-38e482b406e6" (UID: "4ece903b-ffa0-4dcd-8116-38e482b406e6"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.236129 4811 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ece903b-ffa0-4dcd-8116-38e482b406e6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.236178 4811 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.236200 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.236779 4811 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.236882 4811 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ece903b-ffa0-4dcd-8116-38e482b406e6-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.236924 4811 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.238282 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "4ece903b-ffa0-4dcd-8116-38e482b406e6" (UID: "4ece903b-ffa0-4dcd-8116-38e482b406e6"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.261484 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ece903b-ffa0-4dcd-8116-38e482b406e6-builder-dockercfg-xrr5t-pull" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-pull") pod "4ece903b-ffa0-4dcd-8116-38e482b406e6" (UID: "4ece903b-ffa0-4dcd-8116-38e482b406e6"). InnerVolumeSpecName "builder-dockercfg-xrr5t-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.269654 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ece903b-ffa0-4dcd-8116-38e482b406e6-builder-dockercfg-xrr5t-push" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-push") pod "4ece903b-ffa0-4dcd-8116-38e482b406e6" (UID: "4ece903b-ffa0-4dcd-8116-38e482b406e6"). InnerVolumeSpecName "builder-dockercfg-xrr5t-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.269801 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ece903b-ffa0-4dcd-8116-38e482b406e6-kube-api-access-nzwxv" (OuterVolumeSpecName: "kube-api-access-nzwxv") pod "4ece903b-ffa0-4dcd-8116-38e482b406e6" (UID: "4ece903b-ffa0-4dcd-8116-38e482b406e6"). InnerVolumeSpecName "kube-api-access-nzwxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.343969 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwxv\" (UniqueName: \"kubernetes.io/projected/4ece903b-ffa0-4dcd-8116-38e482b406e6-kube-api-access-nzwxv\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.344033 4811 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.344050 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/4ece903b-ffa0-4dcd-8116-38e482b406e6-builder-dockercfg-xrr5t-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.344064 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/4ece903b-ffa0-4dcd-8116-38e482b406e6-builder-dockercfg-xrr5t-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.498852 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "4ece903b-ffa0-4dcd-8116-38e482b406e6" (UID: "4ece903b-ffa0-4dcd-8116-38e482b406e6"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.549165 4811 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.688194 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"4ece903b-ffa0-4dcd-8116-38e482b406e6","Type":"ContainerDied","Data":"2b0ef26f6be97652386059e41abeb9f57d4dc3560ab401f8810f5a02effd8b01"} Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.688246 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b0ef26f6be97652386059e41abeb9f57d4dc3560ab401f8810f5a02effd8b01" Dec 03 00:22:52 crc kubenswrapper[4811]: I1203 00:22:52.688311 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:22:54 crc kubenswrapper[4811]: I1203 00:22:54.676737 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "4ece903b-ffa0-4dcd-8116-38e482b406e6" (UID: "4ece903b-ffa0-4dcd-8116-38e482b406e6"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:22:54 crc kubenswrapper[4811]: I1203 00:22:54.689346 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ece903b-ffa0-4dcd-8116-38e482b406e6-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.619305 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:22:57 crc kubenswrapper[4811]: E1203 00:22:57.619846 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90380e99-834f-4df3-a099-77ff2dfb5f61" containerName="extract-content" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.619862 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="90380e99-834f-4df3-a099-77ff2dfb5f61" containerName="extract-content" Dec 03 00:22:57 crc kubenswrapper[4811]: E1203 00:22:57.619882 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90380e99-834f-4df3-a099-77ff2dfb5f61" containerName="registry-server" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.619890 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="90380e99-834f-4df3-a099-77ff2dfb5f61" containerName="registry-server" Dec 03 00:22:57 crc kubenswrapper[4811]: E1203 00:22:57.619922 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ece903b-ffa0-4dcd-8116-38e482b406e6" containerName="manage-dockerfile" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.619930 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ece903b-ffa0-4dcd-8116-38e482b406e6" containerName="manage-dockerfile" Dec 03 00:22:57 crc kubenswrapper[4811]: E1203 00:22:57.619948 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90380e99-834f-4df3-a099-77ff2dfb5f61" containerName="extract-utilities" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.619955 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="90380e99-834f-4df3-a099-77ff2dfb5f61" containerName="extract-utilities" Dec 03 00:22:57 crc kubenswrapper[4811]: E1203 00:22:57.619969 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ece903b-ffa0-4dcd-8116-38e482b406e6" containerName="docker-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.619976 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ece903b-ffa0-4dcd-8116-38e482b406e6" containerName="docker-build" Dec 03 00:22:57 crc kubenswrapper[4811]: E1203 00:22:57.619988 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ece903b-ffa0-4dcd-8116-38e482b406e6" containerName="git-clone" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.619995 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ece903b-ffa0-4dcd-8116-38e482b406e6" containerName="git-clone" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.620107 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ece903b-ffa0-4dcd-8116-38e482b406e6" containerName="docker-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.620115 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="90380e99-834f-4df3-a099-77ff2dfb5f61" containerName="registry-server" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.620818 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.623574 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.623645 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.624035 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-xrr5t" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.624989 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.637910 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.730390 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.730434 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/1794377a-5ef7-4a07-825a-e1b2c51cec9f-builder-dockercfg-xrr5t-pull\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.730489 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.730505 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-buildworkdir\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.730530 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-container-storage-run\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.730619 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.730644 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/1794377a-5ef7-4a07-825a-e1b2c51cec9f-buildcachedir\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.730665 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwxjh\" (UniqueName: \"kubernetes.io/projected/1794377a-5ef7-4a07-825a-e1b2c51cec9f-kube-api-access-hwxjh\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.730685 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-system-configs\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.730771 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1794377a-5ef7-4a07-825a-e1b2c51cec9f-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.730804 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/1794377a-5ef7-4a07-825a-e1b2c51cec9f-builder-dockercfg-xrr5t-push\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.730833 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-container-storage-root\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832148 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1794377a-5ef7-4a07-825a-e1b2c51cec9f-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832216 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/1794377a-5ef7-4a07-825a-e1b2c51cec9f-builder-dockercfg-xrr5t-push\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832254 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-container-storage-root\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832304 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832324 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/1794377a-5ef7-4a07-825a-e1b2c51cec9f-builder-dockercfg-xrr5t-pull\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832332 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1794377a-5ef7-4a07-825a-e1b2c51cec9f-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832349 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-buildworkdir\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832372 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832402 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-container-storage-run\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832443 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832467 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1794377a-5ef7-4a07-825a-e1b2c51cec9f-buildcachedir\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832490 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwxjh\" (UniqueName: \"kubernetes.io/projected/1794377a-5ef7-4a07-825a-e1b2c51cec9f-kube-api-access-hwxjh\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832520 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-system-configs\") pod \"sg-core-1-build\" (UID: 
\"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832814 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832926 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-buildworkdir\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832971 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1794377a-5ef7-4a07-825a-e1b2c51cec9f-buildcachedir\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.832936 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-container-storage-root\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.833222 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-system-configs\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.833875 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.833882 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.834012 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-container-storage-run\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.838065 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/1794377a-5ef7-4a07-825a-e1b2c51cec9f-builder-dockercfg-xrr5t-push\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.839605 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/1794377a-5ef7-4a07-825a-e1b2c51cec9f-builder-dockercfg-xrr5t-pull\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.849651 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwxjh\" (UniqueName: \"kubernetes.io/projected/1794377a-5ef7-4a07-825a-e1b2c51cec9f-kube-api-access-hwxjh\") pod \"sg-core-1-build\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:22:57 crc kubenswrapper[4811]: I1203 00:22:57.936861 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 03 00:22:58 crc kubenswrapper[4811]: I1203 00:22:58.422119 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:22:58 crc kubenswrapper[4811]: I1203 00:22:58.732894 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1794377a-5ef7-4a07-825a-e1b2c51cec9f","Type":"ContainerStarted","Data":"edc348680ea34178a0f75825ff7fc5707f7a23a1006e39eeba96a0ad85875271"} Dec 03 00:22:59 crc kubenswrapper[4811]: I1203 00:22:59.745948 4811 generic.go:334] "Generic (PLEG): container finished" podID="1794377a-5ef7-4a07-825a-e1b2c51cec9f" containerID="36cca760ca21637dbae5a8f9eff71b0aeb808c2971d2260e2a5ddf3a56b341c1" exitCode=0 Dec 03 00:22:59 crc kubenswrapper[4811]: I1203 00:22:59.746037 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1794377a-5ef7-4a07-825a-e1b2c51cec9f","Type":"ContainerDied","Data":"36cca760ca21637dbae5a8f9eff71b0aeb808c2971d2260e2a5ddf3a56b341c1"} Dec 03 00:23:00 crc kubenswrapper[4811]: I1203 00:23:00.757359 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1794377a-5ef7-4a07-825a-e1b2c51cec9f","Type":"ContainerStarted","Data":"66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85"} Dec 03 00:23:00 crc kubenswrapper[4811]: I1203 00:23:00.796612 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.7965927820000003 podStartE2EDuration="3.796592782s" podCreationTimestamp="2025-12-03 00:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:23:00.78777561 +0000 UTC m=+1020.929605132" watchObservedRunningTime="2025-12-03 00:23:00.796592782 +0000 UTC m=+1020.938422264" Dec 03 00:23:02 crc kubenswrapper[4811]: I1203 00:23:02.940854 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:23:02 crc kubenswrapper[4811]: I1203 00:23:02.942088 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:23:08 crc kubenswrapper[4811]: I1203 00:23:08.464102 
4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:23:08 crc kubenswrapper[4811]: I1203 00:23:08.464790 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="1794377a-5ef7-4a07-825a-e1b2c51cec9f" containerName="docker-build" containerID="cri-o://66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85" gracePeriod=30 Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.568818 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.571711 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.573981 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.574655 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.575217 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.588068 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.634462 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/596396ce-0e21-49e8-a579-3042e31ad65f-builder-dockercfg-xrr5t-pull\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.634569 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndkzl\" (UniqueName: \"kubernetes.io/projected/596396ce-0e21-49e8-a579-3042e31ad65f-kube-api-access-ndkzl\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.634629 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/596396ce-0e21-49e8-a579-3042e31ad65f-builder-dockercfg-xrr5t-push\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.634656 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/596396ce-0e21-49e8-a579-3042e31ad65f-buildcachedir\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.634676 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-system-configs\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.634797 
4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.634869 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.634964 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.635014 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-buildworkdir\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.635035 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/596396ce-0e21-49e8-a579-3042e31ad65f-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.635171 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-container-storage-root\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.635213 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-container-storage-run\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.676687 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_1794377a-5ef7-4a07-825a-e1b2c51cec9f/docker-build/0.log" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.677104 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737018 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/1794377a-5ef7-4a07-825a-e1b2c51cec9f-builder-dockercfg-xrr5t-push\") pod \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737070 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-ca-bundles\") pod \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737102 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1794377a-5ef7-4a07-825a-e1b2c51cec9f-buildcachedir\") pod \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737148 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-container-storage-root\") pod \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737172 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-container-storage-run\") pod \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737213 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-proxy-ca-bundles\") pod \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737247 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/1794377a-5ef7-4a07-825a-e1b2c51cec9f-builder-dockercfg-xrr5t-pull\") pod \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737306 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-buildworkdir\") pod \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737368 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-system-configs\") pod \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737398 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwxjh\" (UniqueName: 
\"kubernetes.io/projected/1794377a-5ef7-4a07-825a-e1b2c51cec9f-kube-api-access-hwxjh\") pod \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737441 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-blob-cache\") pod \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737471 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1794377a-5ef7-4a07-825a-e1b2c51cec9f-node-pullsecrets\") pod \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\" (UID: \"1794377a-5ef7-4a07-825a-e1b2c51cec9f\") " Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737658 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-container-storage-run\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737701 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/596396ce-0e21-49e8-a579-3042e31ad65f-builder-dockercfg-xrr5t-pull\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737747 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndkzl\" (UniqueName: \"kubernetes.io/projected/596396ce-0e21-49e8-a579-3042e31ad65f-kube-api-access-ndkzl\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737768 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/596396ce-0e21-49e8-a579-3042e31ad65f-builder-dockercfg-xrr5t-push\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737801 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/596396ce-0e21-49e8-a579-3042e31ad65f-buildcachedir\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737873 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-system-configs\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737933 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " 
pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737963 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.737991 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.738015 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-buildworkdir\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.738016 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1794377a-5ef7-4a07-825a-e1b2c51cec9f" (UID: "1794377a-5ef7-4a07-825a-e1b2c51cec9f"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.738034 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/596396ce-0e21-49e8-a579-3042e31ad65f-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.738093 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/596396ce-0e21-49e8-a579-3042e31ad65f-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.738205 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-container-storage-root\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.738328 4811 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.738558 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1794377a-5ef7-4a07-825a-e1b2c51cec9f" (UID: "1794377a-5ef7-4a07-825a-e1b2c51cec9f"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.738701 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1794377a-5ef7-4a07-825a-e1b2c51cec9f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1794377a-5ef7-4a07-825a-e1b2c51cec9f" (UID: "1794377a-5ef7-4a07-825a-e1b2c51cec9f"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.738919 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1794377a-5ef7-4a07-825a-e1b2c51cec9f" (UID: "1794377a-5ef7-4a07-825a-e1b2c51cec9f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.738999 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-container-storage-root\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.739224 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/596396ce-0e21-49e8-a579-3042e31ad65f-buildcachedir\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.739416 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1794377a-5ef7-4a07-825a-e1b2c51cec9f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1794377a-5ef7-4a07-825a-e1b2c51cec9f" (UID: "1794377a-5ef7-4a07-825a-e1b2c51cec9f"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.739430 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.739983 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.740291 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-buildworkdir\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.740370 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-system-configs\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.740560 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.741044 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-container-storage-run\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.741135 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1794377a-5ef7-4a07-825a-e1b2c51cec9f" (UID: "1794377a-5ef7-4a07-825a-e1b2c51cec9f"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.741395 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1794377a-5ef7-4a07-825a-e1b2c51cec9f" (UID: "1794377a-5ef7-4a07-825a-e1b2c51cec9f"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.743827 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1794377a-5ef7-4a07-825a-e1b2c51cec9f-kube-api-access-hwxjh" (OuterVolumeSpecName: "kube-api-access-hwxjh") pod "1794377a-5ef7-4a07-825a-e1b2c51cec9f" (UID: "1794377a-5ef7-4a07-825a-e1b2c51cec9f"). InnerVolumeSpecName "kube-api-access-hwxjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.744049 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1794377a-5ef7-4a07-825a-e1b2c51cec9f-builder-dockercfg-xrr5t-push" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-push") pod "1794377a-5ef7-4a07-825a-e1b2c51cec9f" (UID: "1794377a-5ef7-4a07-825a-e1b2c51cec9f"). InnerVolumeSpecName "builder-dockercfg-xrr5t-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.744220 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1794377a-5ef7-4a07-825a-e1b2c51cec9f-builder-dockercfg-xrr5t-pull" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-pull") pod "1794377a-5ef7-4a07-825a-e1b2c51cec9f" (UID: "1794377a-5ef7-4a07-825a-e1b2c51cec9f"). InnerVolumeSpecName "builder-dockercfg-xrr5t-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.751512 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/596396ce-0e21-49e8-a579-3042e31ad65f-builder-dockercfg-xrr5t-pull\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.754509 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/596396ce-0e21-49e8-a579-3042e31ad65f-builder-dockercfg-xrr5t-push\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.768787 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndkzl\" (UniqueName: \"kubernetes.io/projected/596396ce-0e21-49e8-a579-3042e31ad65f-kube-api-access-ndkzl\") pod \"sg-core-2-build\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.839787 4811 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1794377a-5ef7-4a07-825a-e1b2c51cec9f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.839854 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/1794377a-5ef7-4a07-825a-e1b2c51cec9f-builder-dockercfg-xrr5t-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.839870 4811 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1794377a-5ef7-4a07-825a-e1b2c51cec9f-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.839881 4811 reconciler_common.go:293] 
"Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.839893 4811 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.839905 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/1794377a-5ef7-4a07-825a-e1b2c51cec9f-builder-dockercfg-xrr5t-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.839917 4811 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.839928 4811 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.839938 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwxjh\" (UniqueName: \"kubernetes.io/projected/1794377a-5ef7-4a07-825a-e1b2c51cec9f-kube-api-access-hwxjh\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.864462 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_1794377a-5ef7-4a07-825a-e1b2c51cec9f/docker-build/0.log" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.865179 4811 generic.go:334] "Generic (PLEG): container finished" podID="1794377a-5ef7-4a07-825a-e1b2c51cec9f" containerID="66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85" exitCode=1 Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.865231 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1794377a-5ef7-4a07-825a-e1b2c51cec9f","Type":"ContainerDied","Data":"66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85"} Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.865280 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"1794377a-5ef7-4a07-825a-e1b2c51cec9f","Type":"ContainerDied","Data":"edc348680ea34178a0f75825ff7fc5707f7a23a1006e39eeba96a0ad85875271"} Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.865301 4811 scope.go:117] "RemoveContainer" containerID="66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.865310 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.882240 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1794377a-5ef7-4a07-825a-e1b2c51cec9f" (UID: "1794377a-5ef7-4a07-825a-e1b2c51cec9f"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.894243 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.909919 4811 scope.go:117] "RemoveContainer" containerID="36cca760ca21637dbae5a8f9eff71b0aeb808c2971d2260e2a5ddf3a56b341c1" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.935221 4811 scope.go:117] "RemoveContainer" containerID="66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85" Dec 03 00:23:10 crc kubenswrapper[4811]: E1203 00:23:10.935815 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85\": container with ID starting with 66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85 not found: ID does not exist" containerID="66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.935878 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85"} err="failed to get container status \"66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85\": rpc error: code = NotFound desc = could not find container \"66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85\": container with ID starting with 66a5fdce29000ac37ff59a6ca11bea26e84ec53b04ee3b00ee0c1382f45f8e85 not found: ID does not exist" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.935932 4811 scope.go:117] "RemoveContainer" containerID="36cca760ca21637dbae5a8f9eff71b0aeb808c2971d2260e2a5ddf3a56b341c1" Dec 03 00:23:10 crc kubenswrapper[4811]: E1203 00:23:10.936683 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36cca760ca21637dbae5a8f9eff71b0aeb808c2971d2260e2a5ddf3a56b341c1\": container with ID starting with 36cca760ca21637dbae5a8f9eff71b0aeb808c2971d2260e2a5ddf3a56b341c1 not found: ID does not exist" containerID="36cca760ca21637dbae5a8f9eff71b0aeb808c2971d2260e2a5ddf3a56b341c1" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.936795 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36cca760ca21637dbae5a8f9eff71b0aeb808c2971d2260e2a5ddf3a56b341c1"} err="failed to get container status \"36cca760ca21637dbae5a8f9eff71b0aeb808c2971d2260e2a5ddf3a56b341c1\": rpc error: code = NotFound desc = could not find container \"36cca760ca21637dbae5a8f9eff71b0aeb808c2971d2260e2a5ddf3a56b341c1\": container with ID starting with 36cca760ca21637dbae5a8f9eff71b0aeb808c2971d2260e2a5ddf3a56b341c1 not found: ID does not exist" Dec 03 00:23:10 crc kubenswrapper[4811]: I1203 00:23:10.941676 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:11 crc kubenswrapper[4811]: I1203 00:23:11.114240 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Dec 03 00:23:11 crc kubenswrapper[4811]: W1203 00:23:11.122179 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod596396ce_0e21_49e8_a579_3042e31ad65f.slice/crio-a32d7523994d6dcbee6ead70d5d9ce603de20aa249020448e3fe17a1d493d9c6 WatchSource:0}: Error finding container a32d7523994d6dcbee6ead70d5d9ce603de20aa249020448e3fe17a1d493d9c6: Status 404 returned error can't find the container with id a32d7523994d6dcbee6ead70d5d9ce603de20aa249020448e3fe17a1d493d9c6 Dec 03 00:23:11 crc kubenswrapper[4811]: I1203 00:23:11.877862 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"596396ce-0e21-49e8-a579-3042e31ad65f","Type":"ContainerStarted","Data":"a32d7523994d6dcbee6ead70d5d9ce603de20aa249020448e3fe17a1d493d9c6"} Dec 03 00:23:12 crc kubenswrapper[4811]: I1203 00:23:12.362077 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1794377a-5ef7-4a07-825a-e1b2c51cec9f" (UID: "1794377a-5ef7-4a07-825a-e1b2c51cec9f"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:23:12 crc kubenswrapper[4811]: I1203 00:23:12.367590 4811 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1794377a-5ef7-4a07-825a-e1b2c51cec9f-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:12 crc kubenswrapper[4811]: I1203 00:23:12.409437 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:23:12 crc kubenswrapper[4811]: I1203 00:23:12.417946 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:23:12 crc kubenswrapper[4811]: I1203 00:23:12.894388 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"596396ce-0e21-49e8-a579-3042e31ad65f","Type":"ContainerStarted","Data":"504d18e39d09dd062aff24f2a68e2210cd574a65aaa03bf0e5dcb07082ec06ac"} Dec 03 00:23:13 crc kubenswrapper[4811]: I1203 00:23:13.902433 4811 generic.go:334] "Generic (PLEG): container finished" podID="596396ce-0e21-49e8-a579-3042e31ad65f" containerID="504d18e39d09dd062aff24f2a68e2210cd574a65aaa03bf0e5dcb07082ec06ac" exitCode=0 Dec 03 00:23:13 crc kubenswrapper[4811]: I1203 00:23:13.902528 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"596396ce-0e21-49e8-a579-3042e31ad65f","Type":"ContainerDied","Data":"504d18e39d09dd062aff24f2a68e2210cd574a65aaa03bf0e5dcb07082ec06ac"} Dec 03 00:23:14 crc kubenswrapper[4811]: I1203 00:23:14.127442 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1794377a-5ef7-4a07-825a-e1b2c51cec9f" path="/var/lib/kubelet/pods/1794377a-5ef7-4a07-825a-e1b2c51cec9f/volumes" Dec 03 00:23:14 crc kubenswrapper[4811]: I1203 00:23:14.912191 4811 generic.go:334] "Generic (PLEG): container finished" podID="596396ce-0e21-49e8-a579-3042e31ad65f" containerID="557266fc2b803a2847128367fea1b0d91a55da53a5594e215d28b37590d80ed9" exitCode=0 Dec 03 00:23:14 crc kubenswrapper[4811]: I1203 00:23:14.912343 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"596396ce-0e21-49e8-a579-3042e31ad65f","Type":"ContainerDied","Data":"557266fc2b803a2847128367fea1b0d91a55da53a5594e215d28b37590d80ed9"} Dec 03 00:23:14 crc kubenswrapper[4811]: I1203 00:23:14.962027 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_sg-core-2-build_596396ce-0e21-49e8-a579-3042e31ad65f/manage-dockerfile/0.log" Dec 03 00:23:15 crc kubenswrapper[4811]: I1203 00:23:15.929829 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"596396ce-0e21-49e8-a579-3042e31ad65f","Type":"ContainerStarted","Data":"fe8983805ee3bb51a7966228f0ebf322aaa18701a4da30e503dc00c2e7c3f2ed"} Dec 03 00:23:15 crc kubenswrapper[4811]: I1203 00:23:15.971617 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.971581462 podStartE2EDuration="5.971581462s" podCreationTimestamp="2025-12-03 00:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:23:15.967376989 +0000 UTC m=+1036.109206461" watchObservedRunningTime="2025-12-03 00:23:15.971581462 +0000 UTC m=+1036.113410954" Dec 03 00:23:32 crc kubenswrapper[4811]: I1203 00:23:32.940107 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:23:32 crc kubenswrapper[4811]: I1203 00:23:32.940758 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:23:32 crc kubenswrapper[4811]: I1203 00:23:32.940818 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:23:32 crc kubenswrapper[4811]: I1203 00:23:32.941530 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"874b8048feed7e191debfdbcf8853f72ec34ff95af49474a68f75504656f9153"} pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:23:32 crc kubenswrapper[4811]: I1203 00:23:32.941589 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" containerID="cri-o://874b8048feed7e191debfdbcf8853f72ec34ff95af49474a68f75504656f9153" gracePeriod=600 Dec 03 00:23:33 crc kubenswrapper[4811]: I1203 00:23:33.063692 4811 generic.go:334] "Generic (PLEG): container finished" podID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerID="874b8048feed7e191debfdbcf8853f72ec34ff95af49474a68f75504656f9153" exitCode=0 Dec 03 00:23:33 crc kubenswrapper[4811]: I1203 00:23:33.064062 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerDied","Data":"874b8048feed7e191debfdbcf8853f72ec34ff95af49474a68f75504656f9153"} Dec 03 00:23:33 crc kubenswrapper[4811]: I1203 00:23:33.064139 4811 scope.go:117] "RemoveContainer" containerID="e6b2aa2a2ddd7fd20474659fdb1c709c86b66b5560f41a3dec0d4ef06fe80f30" Dec 03 
00:23:34 crc kubenswrapper[4811]: I1203 00:23:34.080362 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerStarted","Data":"f4d25db9c9ac1df29df1bbcee3a02169fd962097c68bac9c7311fb3f69dcdc76"} Dec 03 00:26:02 crc kubenswrapper[4811]: I1203 00:26:02.940949 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:26:02 crc kubenswrapper[4811]: I1203 00:26:02.943050 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:26:32 crc kubenswrapper[4811]: I1203 00:26:32.939996 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:26:32 crc kubenswrapper[4811]: I1203 00:26:32.940517 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:26:46 crc kubenswrapper[4811]: I1203 00:26:46.924352 4811 generic.go:334] "Generic (PLEG): container finished" podID="596396ce-0e21-49e8-a579-3042e31ad65f" containerID="fe8983805ee3bb51a7966228f0ebf322aaa18701a4da30e503dc00c2e7c3f2ed" exitCode=0 Dec 03 00:26:46 crc kubenswrapper[4811]: I1203 00:26:46.924427 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"596396ce-0e21-49e8-a579-3042e31ad65f","Type":"ContainerDied","Data":"fe8983805ee3bb51a7966228f0ebf322aaa18701a4da30e503dc00c2e7c3f2ed"} Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.246965 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.391914 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-container-storage-run\") pod \"596396ce-0e21-49e8-a579-3042e31ad65f\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.391973 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/596396ce-0e21-49e8-a579-3042e31ad65f-builder-dockercfg-xrr5t-pull\") pod \"596396ce-0e21-49e8-a579-3042e31ad65f\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.392047 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-container-storage-root\") pod \"596396ce-0e21-49e8-a579-3042e31ad65f\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.392068 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/596396ce-0e21-49e8-a579-3042e31ad65f-builder-dockercfg-xrr5t-push\") pod \"596396ce-0e21-49e8-a579-3042e31ad65f\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.392157 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-build-blob-cache\") pod \"596396ce-0e21-49e8-a579-3042e31ad65f\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.392191 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-system-configs\") pod \"596396ce-0e21-49e8-a579-3042e31ad65f\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.392216 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-proxy-ca-bundles\") pod \"596396ce-0e21-49e8-a579-3042e31ad65f\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.393181 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "596396ce-0e21-49e8-a579-3042e31ad65f" (UID: "596396ce-0e21-49e8-a579-3042e31ad65f"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.393577 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndkzl\" (UniqueName: \"kubernetes.io/projected/596396ce-0e21-49e8-a579-3042e31ad65f-kube-api-access-ndkzl\") pod \"596396ce-0e21-49e8-a579-3042e31ad65f\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.393640 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-ca-bundles\") pod \"596396ce-0e21-49e8-a579-3042e31ad65f\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.393544 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "596396ce-0e21-49e8-a579-3042e31ad65f" (UID: "596396ce-0e21-49e8-a579-3042e31ad65f"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.393753 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "596396ce-0e21-49e8-a579-3042e31ad65f" (UID: "596396ce-0e21-49e8-a579-3042e31ad65f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.393698 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/596396ce-0e21-49e8-a579-3042e31ad65f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "596396ce-0e21-49e8-a579-3042e31ad65f" (UID: "596396ce-0e21-49e8-a579-3042e31ad65f"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.393672 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/596396ce-0e21-49e8-a579-3042e31ad65f-buildcachedir\") pod \"596396ce-0e21-49e8-a579-3042e31ad65f\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.393827 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/596396ce-0e21-49e8-a579-3042e31ad65f-node-pullsecrets\") pod \"596396ce-0e21-49e8-a579-3042e31ad65f\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.393857 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-buildworkdir\") pod \"596396ce-0e21-49e8-a579-3042e31ad65f\" (UID: \"596396ce-0e21-49e8-a579-3042e31ad65f\") " Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.393974 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/596396ce-0e21-49e8-a579-3042e31ad65f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "596396ce-0e21-49e8-a579-3042e31ad65f" (UID: "596396ce-0e21-49e8-a579-3042e31ad65f"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.394623 4811 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/596396ce-0e21-49e8-a579-3042e31ad65f-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.394643 4811 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/596396ce-0e21-49e8-a579-3042e31ad65f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.394654 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.394668 4811 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.394681 4811 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.394661 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "596396ce-0e21-49e8-a579-3042e31ad65f" (UID: "596396ce-0e21-49e8-a579-3042e31ad65f"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.399217 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/596396ce-0e21-49e8-a579-3042e31ad65f-kube-api-access-ndkzl" (OuterVolumeSpecName: "kube-api-access-ndkzl") pod "596396ce-0e21-49e8-a579-3042e31ad65f" (UID: "596396ce-0e21-49e8-a579-3042e31ad65f"). InnerVolumeSpecName "kube-api-access-ndkzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.400080 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596396ce-0e21-49e8-a579-3042e31ad65f-builder-dockercfg-xrr5t-pull" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-pull") pod "596396ce-0e21-49e8-a579-3042e31ad65f" (UID: "596396ce-0e21-49e8-a579-3042e31ad65f"). InnerVolumeSpecName "builder-dockercfg-xrr5t-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.402652 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596396ce-0e21-49e8-a579-3042e31ad65f-builder-dockercfg-xrr5t-push" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-push") pod "596396ce-0e21-49e8-a579-3042e31ad65f" (UID: "596396ce-0e21-49e8-a579-3042e31ad65f"). InnerVolumeSpecName "builder-dockercfg-xrr5t-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.407427 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "596396ce-0e21-49e8-a579-3042e31ad65f" (UID: "596396ce-0e21-49e8-a579-3042e31ad65f"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.496065 4811 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/596396ce-0e21-49e8-a579-3042e31ad65f-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.496110 4811 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.496125 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/596396ce-0e21-49e8-a579-3042e31ad65f-builder-dockercfg-xrr5t-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.496141 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/596396ce-0e21-49e8-a579-3042e31ad65f-builder-dockercfg-xrr5t-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.496155 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndkzl\" (UniqueName: \"kubernetes.io/projected/596396ce-0e21-49e8-a579-3042e31ad65f-kube-api-access-ndkzl\") on node \"crc\" DevicePath \"\"" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.712807 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "596396ce-0e21-49e8-a579-3042e31ad65f" (UID: "596396ce-0e21-49e8-a579-3042e31ad65f"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.799790 4811 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.946905 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"596396ce-0e21-49e8-a579-3042e31ad65f","Type":"ContainerDied","Data":"a32d7523994d6dcbee6ead70d5d9ce603de20aa249020448e3fe17a1d493d9c6"} Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.947405 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a32d7523994d6dcbee6ead70d5d9ce603de20aa249020448e3fe17a1d493d9c6" Dec 03 00:26:48 crc kubenswrapper[4811]: I1203 00:26:48.947495 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 03 00:26:51 crc kubenswrapper[4811]: I1203 00:26:51.346373 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "596396ce-0e21-49e8-a579-3042e31ad65f" (UID: "596396ce-0e21-49e8-a579-3042e31ad65f"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:26:51 crc kubenswrapper[4811]: I1203 00:26:51.438868 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/596396ce-0e21-49e8-a579-3042e31ad65f-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.054697 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:26:53 crc kubenswrapper[4811]: E1203 00:26:53.055922 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1794377a-5ef7-4a07-825a-e1b2c51cec9f" containerName="docker-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.055952 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1794377a-5ef7-4a07-825a-e1b2c51cec9f" containerName="docker-build" Dec 03 00:26:53 crc kubenswrapper[4811]: E1203 00:26:53.055970 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="596396ce-0e21-49e8-a579-3042e31ad65f" containerName="git-clone" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.055976 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="596396ce-0e21-49e8-a579-3042e31ad65f" containerName="git-clone" Dec 03 00:26:53 crc kubenswrapper[4811]: E1203 00:26:53.055987 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="596396ce-0e21-49e8-a579-3042e31ad65f" containerName="manage-dockerfile" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.055995 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="596396ce-0e21-49e8-a579-3042e31ad65f" containerName="manage-dockerfile" Dec 03 00:26:53 crc kubenswrapper[4811]: E1203 00:26:53.056006 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1794377a-5ef7-4a07-825a-e1b2c51cec9f" containerName="manage-dockerfile" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.056013 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1794377a-5ef7-4a07-825a-e1b2c51cec9f" containerName="manage-dockerfile" Dec 03 00:26:53 crc kubenswrapper[4811]: E1203 00:26:53.056024 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="596396ce-0e21-49e8-a579-3042e31ad65f" containerName="docker-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.056029 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="596396ce-0e21-49e8-a579-3042e31ad65f" containerName="docker-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.056160 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1794377a-5ef7-4a07-825a-e1b2c51cec9f" containerName="docker-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.056178 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="596396ce-0e21-49e8-a579-3042e31ad65f" containerName="docker-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.057310 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.063099 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.063527 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-xrr5t" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.063742 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.064656 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.072940 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.163946 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.163997 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.164050 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/943f13c7-5fd2-4d77-8db4-3f14677e371f-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.164112 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.164130 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/943f13c7-5fd2-4d77-8db4-3f14677e371f-builder-dockercfg-xrr5t-pull\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.164166 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.164301 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/943f13c7-5fd2-4d77-8db4-3f14677e371f-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.164380 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.164424 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.164510 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8sk6\" (UniqueName: \"kubernetes.io/projected/943f13c7-5fd2-4d77-8db4-3f14677e371f-kube-api-access-j8sk6\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.164547 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.164621 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/943f13c7-5fd2-4d77-8db4-3f14677e371f-builder-dockercfg-xrr5t-push\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.265751 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/943f13c7-5fd2-4d77-8db4-3f14677e371f-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.265798 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.265817 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/943f13c7-5fd2-4d77-8db4-3f14677e371f-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.265826 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: 
\"kubernetes.io/secret/943f13c7-5fd2-4d77-8db4-3f14677e371f-builder-dockercfg-xrr5t-pull\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.265875 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.265906 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/943f13c7-5fd2-4d77-8db4-3f14677e371f-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.265929 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.265962 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.266009 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8sk6\" (UniqueName: \"kubernetes.io/projected/943f13c7-5fd2-4d77-8db4-3f14677e371f-kube-api-access-j8sk6\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.266031 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.266060 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/943f13c7-5fd2-4d77-8db4-3f14677e371f-builder-dockercfg-xrr5t-push\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.266055 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/943f13c7-5fd2-4d77-8db4-3f14677e371f-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.266088 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-buildworkdir\") pod 
\"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.266194 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.266495 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.266597 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.267052 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.267426 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.267488 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.267511 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.268600 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.274658 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/943f13c7-5fd2-4d77-8db4-3f14677e371f-builder-dockercfg-xrr5t-pull\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc 
kubenswrapper[4811]: I1203 00:26:53.276829 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/943f13c7-5fd2-4d77-8db4-3f14677e371f-builder-dockercfg-xrr5t-push\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.290520 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8sk6\" (UniqueName: \"kubernetes.io/projected/943f13c7-5fd2-4d77-8db4-3f14677e371f-kube-api-access-j8sk6\") pod \"sg-bridge-1-build\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.437396 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.728880 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:26:53 crc kubenswrapper[4811]: I1203 00:26:53.991505 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"943f13c7-5fd2-4d77-8db4-3f14677e371f","Type":"ContainerStarted","Data":"2262754a599b239d775786ca00ea802e908bf7e62f8e5139f6bc32f0190654ab"} Dec 03 00:26:55 crc kubenswrapper[4811]: I1203 00:26:55.000161 4811 generic.go:334] "Generic (PLEG): container finished" podID="943f13c7-5fd2-4d77-8db4-3f14677e371f" containerID="bce8f64abf73accb9a4dc61075ab68017177c195dbc39a2c9752a60185594bf6" exitCode=0 Dec 03 00:26:55 crc kubenswrapper[4811]: I1203 00:26:55.000303 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"943f13c7-5fd2-4d77-8db4-3f14677e371f","Type":"ContainerDied","Data":"bce8f64abf73accb9a4dc61075ab68017177c195dbc39a2c9752a60185594bf6"} Dec 03 00:26:56 crc kubenswrapper[4811]: I1203 00:26:56.010730 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"943f13c7-5fd2-4d77-8db4-3f14677e371f","Type":"ContainerStarted","Data":"a08cb32ed428d2590d3f175f30823dd731d0f85ae9f5033ef1d6786df4b0260d"} Dec 03 00:26:56 crc kubenswrapper[4811]: I1203 00:26:56.047869 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.047824221 podStartE2EDuration="3.047824221s" podCreationTimestamp="2025-12-03 00:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:26:56.042498842 +0000 UTC m=+1256.184328344" watchObservedRunningTime="2025-12-03 00:26:56.047824221 +0000 UTC m=+1256.189653733" Dec 03 00:27:02 crc kubenswrapper[4811]: I1203 00:27:02.940121 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:27:02 crc kubenswrapper[4811]: I1203 00:27:02.940742 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 03 00:27:02 crc kubenswrapper[4811]: I1203 00:27:02.940816 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:27:02 crc kubenswrapper[4811]: I1203 00:27:02.941686 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4d25db9c9ac1df29df1bbcee3a02169fd962097c68bac9c7311fb3f69dcdc76"} pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:27:02 crc kubenswrapper[4811]: I1203 00:27:02.941835 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" containerID="cri-o://f4d25db9c9ac1df29df1bbcee3a02169fd962097c68bac9c7311fb3f69dcdc76" gracePeriod=600 Dec 03 00:27:03 crc kubenswrapper[4811]: I1203 00:27:03.848346 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:27:03 crc kubenswrapper[4811]: I1203 00:27:03.848853 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="943f13c7-5fd2-4d77-8db4-3f14677e371f" containerName="docker-build" containerID="cri-o://a08cb32ed428d2590d3f175f30823dd731d0f85ae9f5033ef1d6786df4b0260d" gracePeriod=30 Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.098109 4811 generic.go:334] "Generic (PLEG): container finished" podID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerID="f4d25db9c9ac1df29df1bbcee3a02169fd962097c68bac9c7311fb3f69dcdc76" exitCode=0 Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.098848 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerDied","Data":"f4d25db9c9ac1df29df1bbcee3a02169fd962097c68bac9c7311fb3f69dcdc76"} Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.098943 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerStarted","Data":"da6ab7c89c73f34f2e196fadad96f85f0b4d6e41e72b418a78dc01f58cbadf17"} Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.098975 4811 scope.go:117] "RemoveContainer" containerID="874b8048feed7e191debfdbcf8853f72ec34ff95af49474a68f75504656f9153" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.116129 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_943f13c7-5fd2-4d77-8db4-3f14677e371f/docker-build/0.log" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.117157 4811 generic.go:334] "Generic (PLEG): container finished" podID="943f13c7-5fd2-4d77-8db4-3f14677e371f" containerID="a08cb32ed428d2590d3f175f30823dd731d0f85ae9f5033ef1d6786df4b0260d" exitCode=1 Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.141932 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"943f13c7-5fd2-4d77-8db4-3f14677e371f","Type":"ContainerDied","Data":"a08cb32ed428d2590d3f175f30823dd731d0f85ae9f5033ef1d6786df4b0260d"} Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.214075 
4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_943f13c7-5fd2-4d77-8db4-3f14677e371f/docker-build/0.log" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.214546 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.333950 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-blob-cache\") pod \"943f13c7-5fd2-4d77-8db4-3f14677e371f\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334001 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/943f13c7-5fd2-4d77-8db4-3f14677e371f-node-pullsecrets\") pod \"943f13c7-5fd2-4d77-8db4-3f14677e371f\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334029 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-container-storage-root\") pod \"943f13c7-5fd2-4d77-8db4-3f14677e371f\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334060 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-container-storage-run\") pod \"943f13c7-5fd2-4d77-8db4-3f14677e371f\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334101 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8sk6\" (UniqueName: \"kubernetes.io/projected/943f13c7-5fd2-4d77-8db4-3f14677e371f-kube-api-access-j8sk6\") pod \"943f13c7-5fd2-4d77-8db4-3f14677e371f\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334104 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943f13c7-5fd2-4d77-8db4-3f14677e371f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "943f13c7-5fd2-4d77-8db4-3f14677e371f" (UID: "943f13c7-5fd2-4d77-8db4-3f14677e371f"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334143 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-system-configs\") pod \"943f13c7-5fd2-4d77-8db4-3f14677e371f\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334168 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/943f13c7-5fd2-4d77-8db4-3f14677e371f-builder-dockercfg-xrr5t-push\") pod \"943f13c7-5fd2-4d77-8db4-3f14677e371f\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334208 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-ca-bundles\") pod \"943f13c7-5fd2-4d77-8db4-3f14677e371f\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334235 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/943f13c7-5fd2-4d77-8db4-3f14677e371f-buildcachedir\") pod \"943f13c7-5fd2-4d77-8db4-3f14677e371f\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334294 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-buildworkdir\") pod \"943f13c7-5fd2-4d77-8db4-3f14677e371f\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334325 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/943f13c7-5fd2-4d77-8db4-3f14677e371f-builder-dockercfg-xrr5t-pull\") pod \"943f13c7-5fd2-4d77-8db4-3f14677e371f\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334401 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-proxy-ca-bundles\") pod \"943f13c7-5fd2-4d77-8db4-3f14677e371f\" (UID: \"943f13c7-5fd2-4d77-8db4-3f14677e371f\") " Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334679 4811 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/943f13c7-5fd2-4d77-8db4-3f14677e371f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.334728 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943f13c7-5fd2-4d77-8db4-3f14677e371f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "943f13c7-5fd2-4d77-8db4-3f14677e371f" (UID: "943f13c7-5fd2-4d77-8db4-3f14677e371f"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.335055 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "943f13c7-5fd2-4d77-8db4-3f14677e371f" (UID: "943f13c7-5fd2-4d77-8db4-3f14677e371f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.337066 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "943f13c7-5fd2-4d77-8db4-3f14677e371f" (UID: "943f13c7-5fd2-4d77-8db4-3f14677e371f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.337238 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "943f13c7-5fd2-4d77-8db4-3f14677e371f" (UID: "943f13c7-5fd2-4d77-8db4-3f14677e371f"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.337367 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "943f13c7-5fd2-4d77-8db4-3f14677e371f" (UID: "943f13c7-5fd2-4d77-8db4-3f14677e371f"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.337372 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "943f13c7-5fd2-4d77-8db4-3f14677e371f" (UID: "943f13c7-5fd2-4d77-8db4-3f14677e371f"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.342851 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943f13c7-5fd2-4d77-8db4-3f14677e371f-builder-dockercfg-xrr5t-push" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-push") pod "943f13c7-5fd2-4d77-8db4-3f14677e371f" (UID: "943f13c7-5fd2-4d77-8db4-3f14677e371f"). InnerVolumeSpecName "builder-dockercfg-xrr5t-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.345856 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943f13c7-5fd2-4d77-8db4-3f14677e371f-builder-dockercfg-xrr5t-pull" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-pull") pod "943f13c7-5fd2-4d77-8db4-3f14677e371f" (UID: "943f13c7-5fd2-4d77-8db4-3f14677e371f"). InnerVolumeSpecName "builder-dockercfg-xrr5t-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.350396 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/943f13c7-5fd2-4d77-8db4-3f14677e371f-kube-api-access-j8sk6" (OuterVolumeSpecName: "kube-api-access-j8sk6") pod "943f13c7-5fd2-4d77-8db4-3f14677e371f" (UID: "943f13c7-5fd2-4d77-8db4-3f14677e371f"). InnerVolumeSpecName "kube-api-access-j8sk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.420700 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "943f13c7-5fd2-4d77-8db4-3f14677e371f" (UID: "943f13c7-5fd2-4d77-8db4-3f14677e371f"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.435784 4811 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.435827 4811 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/943f13c7-5fd2-4d77-8db4-3f14677e371f-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.435841 4811 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.435854 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/943f13c7-5fd2-4d77-8db4-3f14677e371f-builder-dockercfg-xrr5t-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.435867 4811 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.435881 4811 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.435893 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.435904 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8sk6\" (UniqueName: \"kubernetes.io/projected/943f13c7-5fd2-4d77-8db4-3f14677e371f-kube-api-access-j8sk6\") on node \"crc\" DevicePath \"\"" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.435915 4811 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/943f13c7-5fd2-4d77-8db4-3f14677e371f-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.435928 4811 reconciler_common.go:293] 
"Volume detached for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/943f13c7-5fd2-4d77-8db4-3f14677e371f-builder-dockercfg-xrr5t-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.762686 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "943f13c7-5fd2-4d77-8db4-3f14677e371f" (UID: "943f13c7-5fd2-4d77-8db4-3f14677e371f"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:27:04 crc kubenswrapper[4811]: I1203 00:27:04.840848 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/943f13c7-5fd2-4d77-8db4-3f14677e371f-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.132506 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_943f13c7-5fd2-4d77-8db4-3f14677e371f/docker-build/0.log" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.133531 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"943f13c7-5fd2-4d77-8db4-3f14677e371f","Type":"ContainerDied","Data":"2262754a599b239d775786ca00ea802e908bf7e62f8e5139f6bc32f0190654ab"} Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.133614 4811 scope.go:117] "RemoveContainer" containerID="a08cb32ed428d2590d3f175f30823dd731d0f85ae9f5033ef1d6786df4b0260d" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.133826 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.184601 4811 scope.go:117] "RemoveContainer" containerID="bce8f64abf73accb9a4dc61075ab68017177c195dbc39a2c9752a60185594bf6" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.195599 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.201721 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.879468 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Dec 03 00:27:05 crc kubenswrapper[4811]: E1203 00:27:05.879932 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943f13c7-5fd2-4d77-8db4-3f14677e371f" containerName="manage-dockerfile" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.880069 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="943f13c7-5fd2-4d77-8db4-3f14677e371f" containerName="manage-dockerfile" Dec 03 00:27:05 crc kubenswrapper[4811]: E1203 00:27:05.880164 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943f13c7-5fd2-4d77-8db4-3f14677e371f" containerName="docker-build" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.880239 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="943f13c7-5fd2-4d77-8db4-3f14677e371f" containerName="docker-build" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.880462 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="943f13c7-5fd2-4d77-8db4-3f14677e371f" containerName="docker-build" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 
00:27:05.881479 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.884003 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.885514 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.885798 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.889129 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-xrr5t" Dec 03 00:27:05 crc kubenswrapper[4811]: I1203 00:27:05.923329 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.057717 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/455a9c2c-c88a-472f-8466-7e9c4725bef5-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.057761 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/455a9c2c-c88a-472f-8466-7e9c4725bef5-builder-dockercfg-xrr5t-push\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.057780 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/455a9c2c-c88a-472f-8466-7e9c4725bef5-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.057799 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.057843 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.057868 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.057884 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-2h482\" (UniqueName: \"kubernetes.io/projected/455a9c2c-c88a-472f-8466-7e9c4725bef5-kube-api-access-2h482\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.057899 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.057932 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.057961 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.057976 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/455a9c2c-c88a-472f-8466-7e9c4725bef5-builder-dockercfg-xrr5t-pull\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.057991 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.127052 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="943f13c7-5fd2-4d77-8db4-3f14677e371f" path="/var/lib/kubelet/pods/943f13c7-5fd2-4d77-8db4-3f14677e371f/volumes" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.159450 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/455a9c2c-c88a-472f-8466-7e9c4725bef5-builder-dockercfg-xrr5t-push\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.159521 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/455a9c2c-c88a-472f-8466-7e9c4725bef5-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.159578 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.159620 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.159687 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.159732 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h482\" (UniqueName: \"kubernetes.io/projected/455a9c2c-c88a-472f-8466-7e9c4725bef5-kube-api-access-2h482\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.159772 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.159849 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.159925 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.159963 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/455a9c2c-c88a-472f-8466-7e9c4725bef5-builder-dockercfg-xrr5t-pull\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.160007 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.160061 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/455a9c2c-c88a-472f-8466-7e9c4725bef5-buildcachedir\") 
pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.160195 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/455a9c2c-c88a-472f-8466-7e9c4725bef5-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.160741 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.160886 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/455a9c2c-c88a-472f-8466-7e9c4725bef5-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.160955 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.161117 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.161133 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.161408 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.161890 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.162173 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.168693 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/455a9c2c-c88a-472f-8466-7e9c4725bef5-builder-dockercfg-xrr5t-pull\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.168988 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/455a9c2c-c88a-472f-8466-7e9c4725bef5-builder-dockercfg-xrr5t-push\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.180844 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h482\" (UniqueName: \"kubernetes.io/projected/455a9c2c-c88a-472f-8466-7e9c4725bef5-kube-api-access-2h482\") pod \"sg-bridge-2-build\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.198491 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 03 00:27:06 crc kubenswrapper[4811]: I1203 00:27:06.688169 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Dec 03 00:27:06 crc kubenswrapper[4811]: W1203 00:27:06.698471 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod455a9c2c_c88a_472f_8466_7e9c4725bef5.slice/crio-fab9da30262c6d049fbd61ff379f505111bec8f3df4698f606c1f14f4cb25008 WatchSource:0}: Error finding container fab9da30262c6d049fbd61ff379f505111bec8f3df4698f606c1f14f4cb25008: Status 404 returned error can't find the container with id fab9da30262c6d049fbd61ff379f505111bec8f3df4698f606c1f14f4cb25008 Dec 03 00:27:07 crc kubenswrapper[4811]: I1203 00:27:07.157233 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"455a9c2c-c88a-472f-8466-7e9c4725bef5","Type":"ContainerStarted","Data":"4652adb504830d207087201e0281f60584d02c3e76c61f5d2a13e25080d366f1"} Dec 03 00:27:07 crc kubenswrapper[4811]: I1203 00:27:07.157582 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"455a9c2c-c88a-472f-8466-7e9c4725bef5","Type":"ContainerStarted","Data":"fab9da30262c6d049fbd61ff379f505111bec8f3df4698f606c1f14f4cb25008"} Dec 03 00:27:08 crc kubenswrapper[4811]: I1203 00:27:08.165666 4811 generic.go:334] "Generic (PLEG): container finished" podID="455a9c2c-c88a-472f-8466-7e9c4725bef5" containerID="4652adb504830d207087201e0281f60584d02c3e76c61f5d2a13e25080d366f1" exitCode=0 Dec 03 00:27:08 crc kubenswrapper[4811]: I1203 00:27:08.165738 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"455a9c2c-c88a-472f-8466-7e9c4725bef5","Type":"ContainerDied","Data":"4652adb504830d207087201e0281f60584d02c3e76c61f5d2a13e25080d366f1"} Dec 03 00:27:09 crc kubenswrapper[4811]: I1203 00:27:09.178120 4811 generic.go:334] "Generic (PLEG): container finished" podID="455a9c2c-c88a-472f-8466-7e9c4725bef5" containerID="a02d9dc0f21f409f4868a5dad6e014b9b64500caa940faa2c7a01ba0944851f4" exitCode=0 Dec 03 00:27:09 crc kubenswrapper[4811]: I1203 00:27:09.178218 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/sg-bridge-2-build" event={"ID":"455a9c2c-c88a-472f-8466-7e9c4725bef5","Type":"ContainerDied","Data":"a02d9dc0f21f409f4868a5dad6e014b9b64500caa940faa2c7a01ba0944851f4"} Dec 03 00:27:09 crc kubenswrapper[4811]: I1203 00:27:09.221680 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_455a9c2c-c88a-472f-8466-7e9c4725bef5/manage-dockerfile/0.log" Dec 03 00:27:10 crc kubenswrapper[4811]: I1203 00:27:10.188486 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"455a9c2c-c88a-472f-8466-7e9c4725bef5","Type":"ContainerStarted","Data":"353fe5397c2cdda4dda3e8cb042faeb6e963b6b5d51629d45ae0c8f3b81206fa"} Dec 03 00:27:10 crc kubenswrapper[4811]: I1203 00:27:10.221506 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.221485946 podStartE2EDuration="5.221485946s" podCreationTimestamp="2025-12-03 00:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:27:10.21800689 +0000 UTC m=+1270.359836392" watchObservedRunningTime="2025-12-03 00:27:10.221485946 +0000 UTC m=+1270.363315428" Dec 03 00:28:00 crc kubenswrapper[4811]: I1203 00:28:00.523816 4811 generic.go:334] "Generic (PLEG): container finished" podID="455a9c2c-c88a-472f-8466-7e9c4725bef5" containerID="353fe5397c2cdda4dda3e8cb042faeb6e963b6b5d51629d45ae0c8f3b81206fa" exitCode=0 Dec 03 00:28:00 crc kubenswrapper[4811]: I1203 00:28:00.523883 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"455a9c2c-c88a-472f-8466-7e9c4725bef5","Type":"ContainerDied","Data":"353fe5397c2cdda4dda3e8cb042faeb6e963b6b5d51629d45ae0c8f3b81206fa"} Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.855955 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.976676 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-proxy-ca-bundles\") pod \"455a9c2c-c88a-472f-8466-7e9c4725bef5\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.976732 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-container-storage-root\") pod \"455a9c2c-c88a-472f-8466-7e9c4725bef5\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.976791 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/455a9c2c-c88a-472f-8466-7e9c4725bef5-builder-dockercfg-xrr5t-push\") pod \"455a9c2c-c88a-472f-8466-7e9c4725bef5\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.976827 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-container-storage-run\") pod \"455a9c2c-c88a-472f-8466-7e9c4725bef5\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.976856 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-blob-cache\") pod \"455a9c2c-c88a-472f-8466-7e9c4725bef5\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.976909 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-buildworkdir\") pod \"455a9c2c-c88a-472f-8466-7e9c4725bef5\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.976934 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-ca-bundles\") pod \"455a9c2c-c88a-472f-8466-7e9c4725bef5\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.976983 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-system-configs\") pod \"455a9c2c-c88a-472f-8466-7e9c4725bef5\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.977003 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h482\" (UniqueName: \"kubernetes.io/projected/455a9c2c-c88a-472f-8466-7e9c4725bef5-kube-api-access-2h482\") pod \"455a9c2c-c88a-472f-8466-7e9c4725bef5\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.977022 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/455a9c2c-c88a-472f-8466-7e9c4725bef5-node-pullsecrets\") pod \"455a9c2c-c88a-472f-8466-7e9c4725bef5\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.977051 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/455a9c2c-c88a-472f-8466-7e9c4725bef5-buildcachedir\") pod \"455a9c2c-c88a-472f-8466-7e9c4725bef5\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.977067 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/455a9c2c-c88a-472f-8466-7e9c4725bef5-builder-dockercfg-xrr5t-pull\") pod \"455a9c2c-c88a-472f-8466-7e9c4725bef5\" (UID: \"455a9c2c-c88a-472f-8466-7e9c4725bef5\") " Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.977224 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/455a9c2c-c88a-472f-8466-7e9c4725bef5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "455a9c2c-c88a-472f-8466-7e9c4725bef5" (UID: "455a9c2c-c88a-472f-8466-7e9c4725bef5"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.977293 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/455a9c2c-c88a-472f-8466-7e9c4725bef5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "455a9c2c-c88a-472f-8466-7e9c4725bef5" (UID: "455a9c2c-c88a-472f-8466-7e9c4725bef5"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.977586 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "455a9c2c-c88a-472f-8466-7e9c4725bef5" (UID: "455a9c2c-c88a-472f-8466-7e9c4725bef5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.977660 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "455a9c2c-c88a-472f-8466-7e9c4725bef5" (UID: "455a9c2c-c88a-472f-8466-7e9c4725bef5"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.977941 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "455a9c2c-c88a-472f-8466-7e9c4725bef5" (UID: "455a9c2c-c88a-472f-8466-7e9c4725bef5"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.978864 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "455a9c2c-c88a-472f-8466-7e9c4725bef5" (UID: "455a9c2c-c88a-472f-8466-7e9c4725bef5"). 
InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.979592 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "455a9c2c-c88a-472f-8466-7e9c4725bef5" (UID: "455a9c2c-c88a-472f-8466-7e9c4725bef5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.982065 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/455a9c2c-c88a-472f-8466-7e9c4725bef5-kube-api-access-2h482" (OuterVolumeSpecName: "kube-api-access-2h482") pod "455a9c2c-c88a-472f-8466-7e9c4725bef5" (UID: "455a9c2c-c88a-472f-8466-7e9c4725bef5"). InnerVolumeSpecName "kube-api-access-2h482". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.982308 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455a9c2c-c88a-472f-8466-7e9c4725bef5-builder-dockercfg-xrr5t-push" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-push") pod "455a9c2c-c88a-472f-8466-7e9c4725bef5" (UID: "455a9c2c-c88a-472f-8466-7e9c4725bef5"). InnerVolumeSpecName "builder-dockercfg-xrr5t-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:28:01 crc kubenswrapper[4811]: I1203 00:28:01.983372 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455a9c2c-c88a-472f-8466-7e9c4725bef5-builder-dockercfg-xrr5t-pull" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-pull") pod "455a9c2c-c88a-472f-8466-7e9c4725bef5" (UID: "455a9c2c-c88a-472f-8466-7e9c4725bef5"). InnerVolumeSpecName "builder-dockercfg-xrr5t-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.079466 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.079498 4811 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.079534 4811 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.079542 4811 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.079553 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h482\" (UniqueName: \"kubernetes.io/projected/455a9c2c-c88a-472f-8466-7e9c4725bef5-kube-api-access-2h482\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.079560 4811 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/455a9c2c-c88a-472f-8466-7e9c4725bef5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.079568 4811 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/455a9c2c-c88a-472f-8466-7e9c4725bef5-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.079576 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/455a9c2c-c88a-472f-8466-7e9c4725bef5-builder-dockercfg-xrr5t-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.079584 4811 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.079593 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/455a9c2c-c88a-472f-8466-7e9c4725bef5-builder-dockercfg-xrr5t-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.089962 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "455a9c2c-c88a-472f-8466-7e9c4725bef5" (UID: "455a9c2c-c88a-472f-8466-7e9c4725bef5"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.181450 4811 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.539974 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"455a9c2c-c88a-472f-8466-7e9c4725bef5","Type":"ContainerDied","Data":"fab9da30262c6d049fbd61ff379f505111bec8f3df4698f606c1f14f4cb25008"} Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.540016 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fab9da30262c6d049fbd61ff379f505111bec8f3df4698f606c1f14f4cb25008" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.540052 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.679241 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "455a9c2c-c88a-472f-8466-7e9c4725bef5" (UID: "455a9c2c-c88a-472f-8466-7e9c4725bef5"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:28:02 crc kubenswrapper[4811]: I1203 00:28:02.688757 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/455a9c2c-c88a-472f-8466-7e9c4725bef5-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:06 crc kubenswrapper[4811]: I1203 00:28:06.985508 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:28:06 crc kubenswrapper[4811]: E1203 00:28:06.986097 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455a9c2c-c88a-472f-8466-7e9c4725bef5" containerName="manage-dockerfile" Dec 03 00:28:06 crc kubenswrapper[4811]: I1203 00:28:06.986114 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="455a9c2c-c88a-472f-8466-7e9c4725bef5" containerName="manage-dockerfile" Dec 03 00:28:06 crc kubenswrapper[4811]: E1203 00:28:06.986131 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455a9c2c-c88a-472f-8466-7e9c4725bef5" containerName="git-clone" Dec 03 00:28:06 crc kubenswrapper[4811]: I1203 00:28:06.986139 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="455a9c2c-c88a-472f-8466-7e9c4725bef5" containerName="git-clone" Dec 03 00:28:06 crc kubenswrapper[4811]: E1203 00:28:06.986149 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455a9c2c-c88a-472f-8466-7e9c4725bef5" containerName="docker-build" Dec 03 00:28:06 crc kubenswrapper[4811]: I1203 00:28:06.986157 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="455a9c2c-c88a-472f-8466-7e9c4725bef5" containerName="docker-build" Dec 03 00:28:06 crc kubenswrapper[4811]: I1203 00:28:06.986303 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="455a9c2c-c88a-472f-8466-7e9c4725bef5" containerName="docker-build" Dec 03 00:28:06 crc kubenswrapper[4811]: I1203 00:28:06.987041 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:06 crc kubenswrapper[4811]: I1203 00:28:06.989884 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Dec 03 00:28:06 crc kubenswrapper[4811]: I1203 00:28:06.989940 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-xrr5t" Dec 03 00:28:06 crc kubenswrapper[4811]: I1203 00:28:06.990093 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Dec 03 00:28:06 crc kubenswrapper[4811]: I1203 00:28:06.990162 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.010537 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.154676 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.154846 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.154912 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.154934 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.154954 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.155080 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.155125 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm5wv\" (UniqueName: \"kubernetes.io/projected/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-kube-api-access-gm5wv\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.155154 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-builder-dockercfg-xrr5t-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.155414 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.155551 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.155672 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-builder-dockercfg-xrr5t-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.155797 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.256595 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.256902 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257002 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257086 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257161 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257229 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257382 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257462 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm5wv\" (UniqueName: \"kubernetes.io/projected/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-kube-api-access-gm5wv\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257540 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-builder-dockercfg-xrr5t-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257581 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257628 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257838 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257904 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257983 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257924 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-builder-dockercfg-xrr5t-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.258308 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.258605 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.258614 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.257907 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.259612 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.259714 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.264328 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-builder-dockercfg-xrr5t-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.272808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-builder-dockercfg-xrr5t-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.289143 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm5wv\" (UniqueName: \"kubernetes.io/projected/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-kube-api-access-gm5wv\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.306469 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:07 crc kubenswrapper[4811]: I1203 00:28:07.565798 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:28:08 crc kubenswrapper[4811]: I1203 00:28:08.581459 4811 generic.go:334] "Generic (PLEG): container finished" podID="77cd54bd-a2a6-4405-a21b-b37ef86f64ac" containerID="01d65df06c5309a430baf2277ca746d9e198ae639eac7e4a73e67a19d521cf13" exitCode=0 Dec 03 00:28:08 crc kubenswrapper[4811]: I1203 00:28:08.581530 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"77cd54bd-a2a6-4405-a21b-b37ef86f64ac","Type":"ContainerDied","Data":"01d65df06c5309a430baf2277ca746d9e198ae639eac7e4a73e67a19d521cf13"} Dec 03 00:28:08 crc kubenswrapper[4811]: I1203 00:28:08.581759 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"77cd54bd-a2a6-4405-a21b-b37ef86f64ac","Type":"ContainerStarted","Data":"adb0405649afbb74b0a65757239f09a2178f72009131ae1f3ee234c8cad35994"} Dec 03 00:28:09 crc kubenswrapper[4811]: I1203 00:28:09.599072 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"77cd54bd-a2a6-4405-a21b-b37ef86f64ac","Type":"ContainerStarted","Data":"924ac1cae13a83e332331f156b1c6f395f28716e26654ce3a808a5ea3fca908e"} Dec 03 00:28:09 crc kubenswrapper[4811]: I1203 00:28:09.644632 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.644603037 podStartE2EDuration="3.644603037s" podCreationTimestamp="2025-12-03 00:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:28:09.636990292 +0000 UTC m=+1329.778819814" watchObservedRunningTime="2025-12-03 00:28:09.644603037 +0000 UTC m=+1329.786432539" Dec 03 00:28:17 crc kubenswrapper[4811]: I1203 00:28:17.460428 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:28:17 crc kubenswrapper[4811]: I1203 00:28:17.461281 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="77cd54bd-a2a6-4405-a21b-b37ef86f64ac" containerName="docker-build" containerID="cri-o://924ac1cae13a83e332331f156b1c6f395f28716e26654ce3a808a5ea3fca908e" gracePeriod=30 Dec 03 00:28:17 crc kubenswrapper[4811]: I1203 00:28:17.659146 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_77cd54bd-a2a6-4405-a21b-b37ef86f64ac/docker-build/0.log" Dec 03 00:28:17 crc kubenswrapper[4811]: I1203 00:28:17.660146 4811 generic.go:334] "Generic (PLEG): container finished" podID="77cd54bd-a2a6-4405-a21b-b37ef86f64ac" containerID="924ac1cae13a83e332331f156b1c6f395f28716e26654ce3a808a5ea3fca908e" exitCode=1 Dec 03 00:28:17 crc kubenswrapper[4811]: I1203 00:28:17.660213 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"77cd54bd-a2a6-4405-a21b-b37ef86f64ac","Type":"ContainerDied","Data":"924ac1cae13a83e332331f156b1c6f395f28716e26654ce3a808a5ea3fca908e"} Dec 03 00:28:17 crc kubenswrapper[4811]: I1203 00:28:17.908147 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_77cd54bd-a2a6-4405-a21b-b37ef86f64ac/docker-build/0.log" Dec 03 00:28:17 crc kubenswrapper[4811]: I1203 00:28:17.908546 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.005471 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-system-configs\") pod \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.005542 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-builder-dockercfg-xrr5t-pull\") pod \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.005582 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-proxy-ca-bundles\") pod \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.005642 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-container-storage-root\") pod \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.005693 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-builder-dockercfg-xrr5t-push\") pod \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.005998 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-buildworkdir\") pod \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.006092 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-ca-bundles\") pod \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.006130 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm5wv\" (UniqueName: \"kubernetes.io/projected/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-kube-api-access-gm5wv\") pod \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.006203 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-blob-cache\") pod 
\"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.006779 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-buildcachedir\") pod \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.006811 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "77cd54bd-a2a6-4405-a21b-b37ef86f64ac" (UID: "77cd54bd-a2a6-4405-a21b-b37ef86f64ac"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.006843 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-container-storage-run\") pod \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.006896 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "77cd54bd-a2a6-4405-a21b-b37ef86f64ac" (UID: "77cd54bd-a2a6-4405-a21b-b37ef86f64ac"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.006920 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-node-pullsecrets\") pod \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\" (UID: \"77cd54bd-a2a6-4405-a21b-b37ef86f64ac\") " Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.007669 4811 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.007676 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "77cd54bd-a2a6-4405-a21b-b37ef86f64ac" (UID: "77cd54bd-a2a6-4405-a21b-b37ef86f64ac"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.007741 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "77cd54bd-a2a6-4405-a21b-b37ef86f64ac" (UID: "77cd54bd-a2a6-4405-a21b-b37ef86f64ac"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.007706 4811 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.008255 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "77cd54bd-a2a6-4405-a21b-b37ef86f64ac" (UID: "77cd54bd-a2a6-4405-a21b-b37ef86f64ac"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.008541 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "77cd54bd-a2a6-4405-a21b-b37ef86f64ac" (UID: "77cd54bd-a2a6-4405-a21b-b37ef86f64ac"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.009305 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "77cd54bd-a2a6-4405-a21b-b37ef86f64ac" (UID: "77cd54bd-a2a6-4405-a21b-b37ef86f64ac"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.011949 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-kube-api-access-gm5wv" (OuterVolumeSpecName: "kube-api-access-gm5wv") pod "77cd54bd-a2a6-4405-a21b-b37ef86f64ac" (UID: "77cd54bd-a2a6-4405-a21b-b37ef86f64ac"). InnerVolumeSpecName "kube-api-access-gm5wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.012559 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-builder-dockercfg-xrr5t-pull" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-pull") pod "77cd54bd-a2a6-4405-a21b-b37ef86f64ac" (UID: "77cd54bd-a2a6-4405-a21b-b37ef86f64ac"). InnerVolumeSpecName "builder-dockercfg-xrr5t-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.024491 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-builder-dockercfg-xrr5t-push" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-push") pod "77cd54bd-a2a6-4405-a21b-b37ef86f64ac" (UID: "77cd54bd-a2a6-4405-a21b-b37ef86f64ac"). InnerVolumeSpecName "builder-dockercfg-xrr5t-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.058935 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "77cd54bd-a2a6-4405-a21b-b37ef86f64ac" (UID: "77cd54bd-a2a6-4405-a21b-b37ef86f64ac"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.110113 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.110170 4811 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.110186 4811 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.110200 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-builder-dockercfg-xrr5t-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.110216 4811 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.110230 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-builder-dockercfg-xrr5t-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.110245 4811 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.110277 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm5wv\" (UniqueName: \"kubernetes.io/projected/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-kube-api-access-gm5wv\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.110291 4811 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.350576 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "77cd54bd-a2a6-4405-a21b-b37ef86f64ac" (UID: "77cd54bd-a2a6-4405-a21b-b37ef86f64ac"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.414305 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77cd54bd-a2a6-4405-a21b-b37ef86f64ac-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.667868 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_77cd54bd-a2a6-4405-a21b-b37ef86f64ac/docker-build/0.log" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.668253 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"77cd54bd-a2a6-4405-a21b-b37ef86f64ac","Type":"ContainerDied","Data":"adb0405649afbb74b0a65757239f09a2178f72009131ae1f3ee234c8cad35994"} Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.668321 4811 scope.go:117] "RemoveContainer" containerID="924ac1cae13a83e332331f156b1c6f395f28716e26654ce3a808a5ea3fca908e" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.668485 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.698911 4811 scope.go:117] "RemoveContainer" containerID="01d65df06c5309a430baf2277ca746d9e198ae639eac7e4a73e67a19d521cf13" Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.716635 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:28:18 crc kubenswrapper[4811]: I1203 00:28:18.722436 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.177929 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 03 00:28:19 crc kubenswrapper[4811]: E1203 00:28:19.178684 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77cd54bd-a2a6-4405-a21b-b37ef86f64ac" containerName="docker-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.178820 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="77cd54bd-a2a6-4405-a21b-b37ef86f64ac" containerName="docker-build" Dec 03 00:28:19 crc kubenswrapper[4811]: E1203 00:28:19.178947 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77cd54bd-a2a6-4405-a21b-b37ef86f64ac" containerName="manage-dockerfile" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.179053 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="77cd54bd-a2a6-4405-a21b-b37ef86f64ac" containerName="manage-dockerfile" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.179339 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="77cd54bd-a2a6-4405-a21b-b37ef86f64ac" containerName="docker-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.180720 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.184351 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.184374 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.185004 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-xrr5t" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.185040 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.269028 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.326371 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-builder-dockercfg-xrr5t-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.326444 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.326552 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.326588 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.326620 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.326665 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc7w2\" (UniqueName: \"kubernetes.io/projected/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-kube-api-access-mc7w2\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.326741 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-builder-dockercfg-xrr5t-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.326776 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.326816 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.326989 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.327060 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.327156 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429224 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429391 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-builder-dockercfg-xrr5t-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: 
I1203 00:28:19.429440 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429513 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429558 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429606 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429656 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc7w2\" (UniqueName: \"kubernetes.io/projected/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-kube-api-access-mc7w2\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429712 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429736 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-builder-dockercfg-xrr5t-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429798 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429842 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" 
(UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429867 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429945 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.429977 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.430221 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.430453 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.430654 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.430775 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.431017 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.431038 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.433889 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.437898 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-builder-dockercfg-xrr5t-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.438612 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-builder-dockercfg-xrr5t-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.456870 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc7w2\" (UniqueName: \"kubernetes.io/projected/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-kube-api-access-mc7w2\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.503800 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:28:19 crc kubenswrapper[4811]: I1203 00:28:19.963341 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 03 00:28:20 crc kubenswrapper[4811]: I1203 00:28:20.124424 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77cd54bd-a2a6-4405-a21b-b37ef86f64ac" path="/var/lib/kubelet/pods/77cd54bd-a2a6-4405-a21b-b37ef86f64ac/volumes" Dec 03 00:28:20 crc kubenswrapper[4811]: I1203 00:28:20.688981 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5","Type":"ContainerStarted","Data":"ad31bb3c2337abe79a59b2adc46090d48b0bd1b5c12bcc1d7664630161227a20"} Dec 03 00:28:20 crc kubenswrapper[4811]: I1203 00:28:20.689049 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5","Type":"ContainerStarted","Data":"bfc642478ec41847e34b4c673f4b973eede1fa9f2f0220e7b1419b9c7ff7a3cf"} Dec 03 00:28:21 crc kubenswrapper[4811]: I1203 00:28:21.699379 4811 generic.go:334] "Generic (PLEG): container finished" podID="8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" containerID="ad31bb3c2337abe79a59b2adc46090d48b0bd1b5c12bcc1d7664630161227a20" exitCode=0 Dec 03 00:28:21 crc kubenswrapper[4811]: I1203 00:28:21.699444 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5","Type":"ContainerDied","Data":"ad31bb3c2337abe79a59b2adc46090d48b0bd1b5c12bcc1d7664630161227a20"} Dec 03 00:28:22 crc kubenswrapper[4811]: I1203 00:28:22.708995 4811 generic.go:334] "Generic (PLEG): container finished" podID="8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" containerID="7b15910fa4a32235b7a4eae159a42bd4dc598a4b29dfaabae0483e7ed1bad88b" exitCode=0 Dec 03 00:28:22 crc kubenswrapper[4811]: I1203 00:28:22.709101 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5","Type":"ContainerDied","Data":"7b15910fa4a32235b7a4eae159a42bd4dc598a4b29dfaabae0483e7ed1bad88b"} Dec 03 00:28:22 crc kubenswrapper[4811]: I1203 00:28:22.751034 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5/manage-dockerfile/0.log" Dec 03 00:28:23 crc kubenswrapper[4811]: I1203 00:28:23.725797 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5","Type":"ContainerStarted","Data":"f1b194ea90105e96c05236ab1172a8e3bf3105cabf375ccadfd93c385f388348"} Dec 03 00:28:23 crc kubenswrapper[4811]: I1203 00:28:23.784507 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=4.784477699 podStartE2EDuration="4.784477699s" podCreationTimestamp="2025-12-03 00:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:28:23.775570883 +0000 UTC m=+1343.917400355" watchObservedRunningTime="2025-12-03 00:28:23.784477699 +0000 UTC m=+1343.926307201" Dec 03 00:29:25 crc kubenswrapper[4811]: I1203 00:29:25.221981 4811 generic.go:334] "Generic 
(PLEG): container finished" podID="8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" containerID="f1b194ea90105e96c05236ab1172a8e3bf3105cabf375ccadfd93c385f388348" exitCode=0 Dec 03 00:29:25 crc kubenswrapper[4811]: I1203 00:29:25.222234 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5","Type":"ContainerDied","Data":"f1b194ea90105e96c05236ab1172a8e3bf3105cabf375ccadfd93c385f388348"} Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.501699 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.551644 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-proxy-ca-bundles\") pod \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.551707 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-container-storage-run\") pod \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.551734 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-container-storage-root\") pod \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.551803 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-builder-dockercfg-xrr5t-pull\") pod \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.551823 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-node-pullsecrets\") pod \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.551852 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc7w2\" (UniqueName: \"kubernetes.io/projected/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-kube-api-access-mc7w2\") pod \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.551870 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-buildcachedir\") pod \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.551889 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-buildworkdir\") pod \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\" 
(UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.551915 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-system-configs\") pod \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.551936 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-builder-dockercfg-xrr5t-push\") pod \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.551960 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-ca-bundles\") pod \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.551981 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-blob-cache\") pod \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\" (UID: \"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5\") " Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.552359 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" (UID: "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.552422 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" (UID: "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.552661 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" (UID: "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.552917 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" (UID: "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.553314 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" (UID: "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.555749 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" (UID: "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.556612 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" (UID: "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.563395 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-builder-dockercfg-xrr5t-pull" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-pull") pod "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" (UID: "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5"). InnerVolumeSpecName "builder-dockercfg-xrr5t-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.563419 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-kube-api-access-mc7w2" (OuterVolumeSpecName: "kube-api-access-mc7w2") pod "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" (UID: "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5"). InnerVolumeSpecName "kube-api-access-mc7w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.569441 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-builder-dockercfg-xrr5t-push" (OuterVolumeSpecName: "builder-dockercfg-xrr5t-push") pod "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" (UID: "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5"). InnerVolumeSpecName "builder-dockercfg-xrr5t-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.634589 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" (UID: "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.653452 4811 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.653497 4811 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.653513 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-push\" (UniqueName: \"kubernetes.io/secret/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-builder-dockercfg-xrr5t-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.653527 4811 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.653540 4811 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.653553 4811 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.653564 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.653575 4811 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-xrr5t-pull\" (UniqueName: \"kubernetes.io/secret/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-builder-dockercfg-xrr5t-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.653586 4811 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.653596 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc7w2\" (UniqueName: \"kubernetes.io/projected/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-kube-api-access-mc7w2\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:26 crc kubenswrapper[4811]: I1203 00:29:26.653606 4811 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:27 crc kubenswrapper[4811]: I1203 00:29:27.242932 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5","Type":"ContainerDied","Data":"bfc642478ec41847e34b4c673f4b973eede1fa9f2f0220e7b1419b9c7ff7a3cf"} Dec 03 00:29:27 crc kubenswrapper[4811]: I1203 00:29:27.242990 4811 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="bfc642478ec41847e34b4c673f4b973eede1fa9f2f0220e7b1419b9c7ff7a3cf" Dec 03 00:29:27 crc kubenswrapper[4811]: I1203 00:29:27.243087 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:29:27 crc kubenswrapper[4811]: I1203 00:29:27.387900 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" (UID: "8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:27 crc kubenswrapper[4811]: I1203 00:29:27.464455 4811 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:31 crc kubenswrapper[4811]: I1203 00:29:31.856036 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q"] Dec 03 00:29:31 crc kubenswrapper[4811]: E1203 00:29:31.856633 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" containerName="git-clone" Dec 03 00:29:31 crc kubenswrapper[4811]: I1203 00:29:31.856647 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" containerName="git-clone" Dec 03 00:29:31 crc kubenswrapper[4811]: E1203 00:29:31.856654 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" containerName="docker-build" Dec 03 00:29:31 crc kubenswrapper[4811]: I1203 00:29:31.856660 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" containerName="docker-build" Dec 03 00:29:31 crc kubenswrapper[4811]: E1203 00:29:31.856673 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" containerName="manage-dockerfile" Dec 03 00:29:31 crc kubenswrapper[4811]: I1203 00:29:31.856680 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" containerName="manage-dockerfile" Dec 03 00:29:31 crc kubenswrapper[4811]: I1203 00:29:31.856777 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bbe893f-2ef6-4d4d-ab2b-0ac678eb99e5" containerName="docker-build" Dec 03 00:29:31 crc kubenswrapper[4811]: I1203 00:29:31.857229 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" Dec 03 00:29:31 crc kubenswrapper[4811]: I1203 00:29:31.859737 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-htw82" Dec 03 00:29:31 crc kubenswrapper[4811]: I1203 00:29:31.876064 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q"] Dec 03 00:29:31 crc kubenswrapper[4811]: I1203 00:29:31.947651 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdsz4\" (UniqueName: \"kubernetes.io/projected/576300b3-9d27-41c9-83eb-23cc9560c2d6-kube-api-access-sdsz4\") pod \"smart-gateway-operator-58f46bc8bd-x9c5q\" (UID: \"576300b3-9d27-41c9-83eb-23cc9560c2d6\") " pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" Dec 03 00:29:31 crc kubenswrapper[4811]: I1203 00:29:31.948022 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/576300b3-9d27-41c9-83eb-23cc9560c2d6-runner\") pod \"smart-gateway-operator-58f46bc8bd-x9c5q\" (UID: \"576300b3-9d27-41c9-83eb-23cc9560c2d6\") " pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" Dec 03 00:29:32 crc kubenswrapper[4811]: I1203 00:29:32.049200 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/576300b3-9d27-41c9-83eb-23cc9560c2d6-runner\") pod \"smart-gateway-operator-58f46bc8bd-x9c5q\" (UID: \"576300b3-9d27-41c9-83eb-23cc9560c2d6\") " pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" Dec 03 00:29:32 crc kubenswrapper[4811]: I1203 00:29:32.049684 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/576300b3-9d27-41c9-83eb-23cc9560c2d6-runner\") pod \"smart-gateway-operator-58f46bc8bd-x9c5q\" (UID: \"576300b3-9d27-41c9-83eb-23cc9560c2d6\") " pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" Dec 03 00:29:32 crc kubenswrapper[4811]: I1203 00:29:32.049823 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdsz4\" (UniqueName: \"kubernetes.io/projected/576300b3-9d27-41c9-83eb-23cc9560c2d6-kube-api-access-sdsz4\") pod \"smart-gateway-operator-58f46bc8bd-x9c5q\" (UID: \"576300b3-9d27-41c9-83eb-23cc9560c2d6\") " pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" Dec 03 00:29:32 crc kubenswrapper[4811]: I1203 00:29:32.075412 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdsz4\" (UniqueName: \"kubernetes.io/projected/576300b3-9d27-41c9-83eb-23cc9560c2d6-kube-api-access-sdsz4\") pod \"smart-gateway-operator-58f46bc8bd-x9c5q\" (UID: \"576300b3-9d27-41c9-83eb-23cc9560c2d6\") " pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" Dec 03 00:29:32 crc kubenswrapper[4811]: I1203 00:29:32.175439 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" Dec 03 00:29:32 crc kubenswrapper[4811]: I1203 00:29:32.410889 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q"] Dec 03 00:29:32 crc kubenswrapper[4811]: I1203 00:29:32.423471 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 00:29:32 crc kubenswrapper[4811]: I1203 00:29:32.940612 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:29:32 crc kubenswrapper[4811]: I1203 00:29:32.941019 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:29:33 crc kubenswrapper[4811]: I1203 00:29:33.311383 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" event={"ID":"576300b3-9d27-41c9-83eb-23cc9560c2d6","Type":"ContainerStarted","Data":"b31e40a1317991af034883db2e6cb8ed46c34f6a52ca7b8893099e3b71238c63"} Dec 03 00:29:38 crc kubenswrapper[4811]: I1203 00:29:38.243416 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk"] Dec 03 00:29:38 crc kubenswrapper[4811]: I1203 00:29:38.244559 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk" Dec 03 00:29:38 crc kubenswrapper[4811]: I1203 00:29:38.247754 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-x4vks" Dec 03 00:29:38 crc kubenswrapper[4811]: I1203 00:29:38.260413 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk"] Dec 03 00:29:38 crc kubenswrapper[4811]: I1203 00:29:38.345338 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pmqj\" (UniqueName: \"kubernetes.io/projected/c4e434c3-79d8-4e80-8b6d-313e8cdf633e-kube-api-access-4pmqj\") pod \"service-telemetry-operator-69c7c84c86-hgqhk\" (UID: \"c4e434c3-79d8-4e80-8b6d-313e8cdf633e\") " pod="service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk" Dec 03 00:29:38 crc kubenswrapper[4811]: I1203 00:29:38.345423 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c4e434c3-79d8-4e80-8b6d-313e8cdf633e-runner\") pod \"service-telemetry-operator-69c7c84c86-hgqhk\" (UID: \"c4e434c3-79d8-4e80-8b6d-313e8cdf633e\") " pod="service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk" Dec 03 00:29:38 crc kubenswrapper[4811]: I1203 00:29:38.446730 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c4e434c3-79d8-4e80-8b6d-313e8cdf633e-runner\") pod \"service-telemetry-operator-69c7c84c86-hgqhk\" (UID: \"c4e434c3-79d8-4e80-8b6d-313e8cdf633e\") " pod="service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk" Dec 03 00:29:38 crc kubenswrapper[4811]: I1203 00:29:38.447056 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pmqj\" (UniqueName: \"kubernetes.io/projected/c4e434c3-79d8-4e80-8b6d-313e8cdf633e-kube-api-access-4pmqj\") pod \"service-telemetry-operator-69c7c84c86-hgqhk\" (UID: \"c4e434c3-79d8-4e80-8b6d-313e8cdf633e\") " pod="service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk" Dec 03 00:29:38 crc kubenswrapper[4811]: I1203 00:29:38.447384 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c4e434c3-79d8-4e80-8b6d-313e8cdf633e-runner\") pod \"service-telemetry-operator-69c7c84c86-hgqhk\" (UID: \"c4e434c3-79d8-4e80-8b6d-313e8cdf633e\") " pod="service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk" Dec 03 00:29:38 crc kubenswrapper[4811]: I1203 00:29:38.474985 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pmqj\" (UniqueName: \"kubernetes.io/projected/c4e434c3-79d8-4e80-8b6d-313e8cdf633e-kube-api-access-4pmqj\") pod \"service-telemetry-operator-69c7c84c86-hgqhk\" (UID: \"c4e434c3-79d8-4e80-8b6d-313e8cdf633e\") " pod="service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk" Dec 03 00:29:38 crc kubenswrapper[4811]: I1203 00:29:38.573390 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk" Dec 03 00:29:45 crc kubenswrapper[4811]: I1203 00:29:45.455413 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk"] Dec 03 00:29:47 crc kubenswrapper[4811]: W1203 00:29:47.678660 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4e434c3_79d8_4e80_8b6d_313e8cdf633e.slice/crio-2392fc540adbab3a1ef4ad2649a88f66cb6eefecd87c3dcff94ae2f802eb8f8b WatchSource:0}: Error finding container 2392fc540adbab3a1ef4ad2649a88f66cb6eefecd87c3dcff94ae2f802eb8f8b: Status 404 returned error can't find the container with id 2392fc540adbab3a1ef4ad2649a88f66cb6eefecd87c3dcff94ae2f802eb8f8b Dec 03 00:29:48 crc kubenswrapper[4811]: E1203 00:29:48.051215 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:latest" Dec 03 00:29:48 crc kubenswrapper[4811]: E1203 00:29:48.051588 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1764721768,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdsz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},
StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-58f46bc8bd-x9c5q_service-telemetry(576300b3-9d27-41c9-83eb-23cc9560c2d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 00:29:48 crc kubenswrapper[4811]: E1203 00:29:48.052943 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" podUID="576300b3-9d27-41c9-83eb-23cc9560c2d6" Dec 03 00:29:48 crc kubenswrapper[4811]: I1203 00:29:48.429186 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk" event={"ID":"c4e434c3-79d8-4e80-8b6d-313e8cdf633e","Type":"ContainerStarted","Data":"2392fc540adbab3a1ef4ad2649a88f66cb6eefecd87c3dcff94ae2f802eb8f8b"} Dec 03 00:29:48 crc kubenswrapper[4811]: E1203 00:29:48.433102 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:latest\\\"\"" pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" podUID="576300b3-9d27-41c9-83eb-23cc9560c2d6" Dec 03 00:29:52 crc kubenswrapper[4811]: I1203 00:29:52.456669 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk" event={"ID":"c4e434c3-79d8-4e80-8b6d-313e8cdf633e","Type":"ContainerStarted","Data":"2c47f8ac66d097da298e7f2b863c083fd4e4340b075f682eed67677431ba16a5"} Dec 03 00:29:52 crc kubenswrapper[4811]: I1203 00:29:52.478588 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-69c7c84c86-hgqhk" podStartSLOduration=10.496495569 podStartE2EDuration="14.478558223s" podCreationTimestamp="2025-12-03 00:29:38 +0000 UTC" firstStartedPulling="2025-12-03 00:29:47.681446289 +0000 UTC m=+1427.823275801" lastFinishedPulling="2025-12-03 00:29:51.663508983 +0000 UTC m=+1431.805338455" observedRunningTime="2025-12-03 00:29:52.475627882 +0000 UTC m=+1432.617457354" watchObservedRunningTime="2025-12-03 00:29:52.478558223 +0000 UTC m=+1432.620387725" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.167947 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt"] Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.171432 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.178586 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.178857 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.179822 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt"] Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.259331 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e647a6d-a92d-4bc7-99ac-b1202999133a-config-volume\") pod \"collect-profiles-29412030-rs7qt\" (UID: \"8e647a6d-a92d-4bc7-99ac-b1202999133a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.259442 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e647a6d-a92d-4bc7-99ac-b1202999133a-secret-volume\") pod \"collect-profiles-29412030-rs7qt\" (UID: \"8e647a6d-a92d-4bc7-99ac-b1202999133a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.259668 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2fnf\" (UniqueName: \"kubernetes.io/projected/8e647a6d-a92d-4bc7-99ac-b1202999133a-kube-api-access-j2fnf\") pod \"collect-profiles-29412030-rs7qt\" (UID: \"8e647a6d-a92d-4bc7-99ac-b1202999133a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.361196 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2fnf\" (UniqueName: \"kubernetes.io/projected/8e647a6d-a92d-4bc7-99ac-b1202999133a-kube-api-access-j2fnf\") pod \"collect-profiles-29412030-rs7qt\" (UID: \"8e647a6d-a92d-4bc7-99ac-b1202999133a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.361295 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e647a6d-a92d-4bc7-99ac-b1202999133a-config-volume\") pod \"collect-profiles-29412030-rs7qt\" (UID: \"8e647a6d-a92d-4bc7-99ac-b1202999133a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.361355 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e647a6d-a92d-4bc7-99ac-b1202999133a-secret-volume\") pod \"collect-profiles-29412030-rs7qt\" (UID: \"8e647a6d-a92d-4bc7-99ac-b1202999133a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.362154 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e647a6d-a92d-4bc7-99ac-b1202999133a-config-volume\") pod 
\"collect-profiles-29412030-rs7qt\" (UID: \"8e647a6d-a92d-4bc7-99ac-b1202999133a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.381454 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e647a6d-a92d-4bc7-99ac-b1202999133a-secret-volume\") pod \"collect-profiles-29412030-rs7qt\" (UID: \"8e647a6d-a92d-4bc7-99ac-b1202999133a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.387914 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2fnf\" (UniqueName: \"kubernetes.io/projected/8e647a6d-a92d-4bc7-99ac-b1202999133a-kube-api-access-j2fnf\") pod \"collect-profiles-29412030-rs7qt\" (UID: \"8e647a6d-a92d-4bc7-99ac-b1202999133a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.495953 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:00 crc kubenswrapper[4811]: I1203 00:30:00.988252 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt"] Dec 03 00:30:01 crc kubenswrapper[4811]: W1203 00:30:01.009012 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e647a6d_a92d_4bc7_99ac_b1202999133a.slice/crio-b5db7e36048caa649adc3dad80b9230b4a18d81ea585a9ba5fd8bbd989305e5a WatchSource:0}: Error finding container b5db7e36048caa649adc3dad80b9230b4a18d81ea585a9ba5fd8bbd989305e5a: Status 404 returned error can't find the container with id b5db7e36048caa649adc3dad80b9230b4a18d81ea585a9ba5fd8bbd989305e5a Dec 03 00:30:01 crc kubenswrapper[4811]: I1203 00:30:01.544003 4811 generic.go:334] "Generic (PLEG): container finished" podID="8e647a6d-a92d-4bc7-99ac-b1202999133a" containerID="7092e1a3e56b3fe7e647f780c8a48c48cdcdbb285d33793f957091a96234dfd9" exitCode=0 Dec 03 00:30:01 crc kubenswrapper[4811]: I1203 00:30:01.544180 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" event={"ID":"8e647a6d-a92d-4bc7-99ac-b1202999133a","Type":"ContainerDied","Data":"7092e1a3e56b3fe7e647f780c8a48c48cdcdbb285d33793f957091a96234dfd9"} Dec 03 00:30:01 crc kubenswrapper[4811]: I1203 00:30:01.545214 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" event={"ID":"8e647a6d-a92d-4bc7-99ac-b1202999133a","Type":"ContainerStarted","Data":"b5db7e36048caa649adc3dad80b9230b4a18d81ea585a9ba5fd8bbd989305e5a"} Dec 03 00:30:02 crc kubenswrapper[4811]: I1203 00:30:02.822460 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:02 crc kubenswrapper[4811]: I1203 00:30:02.897105 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e647a6d-a92d-4bc7-99ac-b1202999133a-secret-volume\") pod \"8e647a6d-a92d-4bc7-99ac-b1202999133a\" (UID: \"8e647a6d-a92d-4bc7-99ac-b1202999133a\") " Dec 03 00:30:02 crc kubenswrapper[4811]: I1203 00:30:02.897659 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e647a6d-a92d-4bc7-99ac-b1202999133a-config-volume\") pod \"8e647a6d-a92d-4bc7-99ac-b1202999133a\" (UID: \"8e647a6d-a92d-4bc7-99ac-b1202999133a\") " Dec 03 00:30:02 crc kubenswrapper[4811]: I1203 00:30:02.897759 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2fnf\" (UniqueName: \"kubernetes.io/projected/8e647a6d-a92d-4bc7-99ac-b1202999133a-kube-api-access-j2fnf\") pod \"8e647a6d-a92d-4bc7-99ac-b1202999133a\" (UID: \"8e647a6d-a92d-4bc7-99ac-b1202999133a\") " Dec 03 00:30:02 crc kubenswrapper[4811]: I1203 00:30:02.898482 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e647a6d-a92d-4bc7-99ac-b1202999133a-config-volume" (OuterVolumeSpecName: "config-volume") pod "8e647a6d-a92d-4bc7-99ac-b1202999133a" (UID: "8e647a6d-a92d-4bc7-99ac-b1202999133a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:30:02 crc kubenswrapper[4811]: I1203 00:30:02.903034 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e647a6d-a92d-4bc7-99ac-b1202999133a-kube-api-access-j2fnf" (OuterVolumeSpecName: "kube-api-access-j2fnf") pod "8e647a6d-a92d-4bc7-99ac-b1202999133a" (UID: "8e647a6d-a92d-4bc7-99ac-b1202999133a"). InnerVolumeSpecName "kube-api-access-j2fnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:30:02 crc kubenswrapper[4811]: I1203 00:30:02.903147 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e647a6d-a92d-4bc7-99ac-b1202999133a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8e647a6d-a92d-4bc7-99ac-b1202999133a" (UID: "8e647a6d-a92d-4bc7-99ac-b1202999133a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:30:02 crc kubenswrapper[4811]: I1203 00:30:02.940847 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:30:02 crc kubenswrapper[4811]: I1203 00:30:02.940935 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:30:02 crc kubenswrapper[4811]: I1203 00:30:02.999079 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e647a6d-a92d-4bc7-99ac-b1202999133a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:02 crc kubenswrapper[4811]: I1203 00:30:02.999140 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2fnf\" (UniqueName: \"kubernetes.io/projected/8e647a6d-a92d-4bc7-99ac-b1202999133a-kube-api-access-j2fnf\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:02 crc kubenswrapper[4811]: I1203 00:30:02.999161 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e647a6d-a92d-4bc7-99ac-b1202999133a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:03 crc kubenswrapper[4811]: I1203 00:30:03.556705 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" event={"ID":"576300b3-9d27-41c9-83eb-23cc9560c2d6","Type":"ContainerStarted","Data":"022fba2e7809ca633d6f217f1ad732031d9eaa6f2623b544bf9c9843fad7690b"} Dec 03 00:30:03 crc kubenswrapper[4811]: I1203 00:30:03.557852 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" event={"ID":"8e647a6d-a92d-4bc7-99ac-b1202999133a","Type":"ContainerDied","Data":"b5db7e36048caa649adc3dad80b9230b4a18d81ea585a9ba5fd8bbd989305e5a"} Dec 03 00:30:03 crc kubenswrapper[4811]: I1203 00:30:03.557901 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5db7e36048caa649adc3dad80b9230b4a18d81ea585a9ba5fd8bbd989305e5a" Dec 03 00:30:03 crc kubenswrapper[4811]: I1203 00:30:03.557958 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-rs7qt" Dec 03 00:30:03 crc kubenswrapper[4811]: I1203 00:30:03.855966 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-58f46bc8bd-x9c5q" podStartSLOduration=2.608508742 podStartE2EDuration="32.855946711s" podCreationTimestamp="2025-12-03 00:29:31 +0000 UTC" firstStartedPulling="2025-12-03 00:29:32.423216005 +0000 UTC m=+1412.565045477" lastFinishedPulling="2025-12-03 00:30:02.670653984 +0000 UTC m=+1442.812483446" observedRunningTime="2025-12-03 00:30:03.580562672 +0000 UTC m=+1443.722392144" watchObservedRunningTime="2025-12-03 00:30:03.855946711 +0000 UTC m=+1443.997776183" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.310934 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-tmpsp"] Dec 03 00:30:18 crc kubenswrapper[4811]: E1203 00:30:18.311673 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e647a6d-a92d-4bc7-99ac-b1202999133a" containerName="collect-profiles" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.311690 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e647a6d-a92d-4bc7-99ac-b1202999133a" containerName="collect-profiles" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.311809 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e647a6d-a92d-4bc7-99ac-b1202999133a" containerName="collect-profiles" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.312299 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.316466 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.316934 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.317326 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.317432 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-xqfg7" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.319488 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.319682 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.319716 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.335922 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-tmpsp"] Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.438227 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-inter-router-ca\") pod 
\"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.438388 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/949f6876-d78f-49fd-b12b-80d0be1ede0b-sasl-config\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.438455 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76mqq\" (UniqueName: \"kubernetes.io/projected/949f6876-d78f-49fd-b12b-80d0be1ede0b-kube-api-access-76mqq\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.438516 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.438541 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.438761 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-sasl-users\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.438998 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.540098 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.540162 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: 
\"kubernetes.io/configmap/949f6876-d78f-49fd-b12b-80d0be1ede0b-sasl-config\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.540203 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76mqq\" (UniqueName: \"kubernetes.io/projected/949f6876-d78f-49fd-b12b-80d0be1ede0b-kube-api-access-76mqq\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.540245 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.540346 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.540383 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-sasl-users\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.540429 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.541566 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/949f6876-d78f-49fd-b12b-80d0be1ede0b-sasl-config\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.547011 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.547013 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: 
\"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.548308 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-sasl-users\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.548429 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.551412 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.564466 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76mqq\" (UniqueName: \"kubernetes.io/projected/949f6876-d78f-49fd-b12b-80d0be1ede0b-kube-api-access-76mqq\") pod \"default-interconnect-68864d46cb-tmpsp\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.628232 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:30:18 crc kubenswrapper[4811]: I1203 00:30:18.858340 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-tmpsp"] Dec 03 00:30:19 crc kubenswrapper[4811]: I1203 00:30:19.684330 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" event={"ID":"949f6876-d78f-49fd-b12b-80d0be1ede0b","Type":"ContainerStarted","Data":"1395c05a768201ec0ee7691ded7fe650e181e6df148137dc376e9397783d7c39"} Dec 03 00:30:31 crc kubenswrapper[4811]: I1203 00:30:31.766976 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" event={"ID":"949f6876-d78f-49fd-b12b-80d0be1ede0b","Type":"ContainerStarted","Data":"0e8cc345a9c59779ddf8e762f64534e18a15e30a2841ccec6631e243b94e244f"} Dec 03 00:30:31 crc kubenswrapper[4811]: I1203 00:30:31.789019 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" podStartSLOduration=2.026429376 podStartE2EDuration="13.788994314s" podCreationTimestamp="2025-12-03 00:30:18 +0000 UTC" firstStartedPulling="2025-12-03 00:30:18.864884264 +0000 UTC m=+1459.006713746" lastFinishedPulling="2025-12-03 00:30:30.627449212 +0000 UTC m=+1470.769278684" observedRunningTime="2025-12-03 00:30:31.783070561 +0000 UTC m=+1471.924900033" watchObservedRunningTime="2025-12-03 00:30:31.788994314 +0000 UTC m=+1471.930823786" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.018671 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.020626 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.023136 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.023509 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-mq6ww" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.023687 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.023859 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.024537 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.025163 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.025314 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.025363 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.040110 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.126922 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.126986 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.127015 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-config-out\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.127079 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-web-config\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.127117 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-tls-assets\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.127339 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.127474 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-config\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.127519 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97mzr\" (UniqueName: \"kubernetes.io/projected/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-kube-api-access-97mzr\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.127660 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.127794 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-23103dc1-5b21-46cd-9cbd-b4e24e77c30e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23103dc1-5b21-46cd-9cbd-b4e24e77c30e\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.229221 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.229313 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-config\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.229341 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97mzr\" (UniqueName: \"kubernetes.io/projected/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-kube-api-access-97mzr\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.229381 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.229429 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-23103dc1-5b21-46cd-9cbd-b4e24e77c30e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23103dc1-5b21-46cd-9cbd-b4e24e77c30e\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.229469 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.229498 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.229525 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-config-out\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.229571 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-web-config\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.229600 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-tls-assets\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: E1203 00:30:32.229659 4811 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 03 00:30:32 crc kubenswrapper[4811]: E1203 00:30:32.229778 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-secret-default-prometheus-proxy-tls podName:e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e nodeName:}" failed. No retries permitted until 2025-12-03 00:30:32.729748889 +0000 UTC m=+1472.871578361 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e") : secret "default-prometheus-proxy-tls" not found Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.230761 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.230963 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.236745 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-web-config\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.236768 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-tls-assets\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.239369 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-config\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.241276 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.241325 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-23103dc1-5b21-46cd-9cbd-b4e24e77c30e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23103dc1-5b21-46cd-9cbd-b4e24e77c30e\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6331e842afa01b6a3b11bde4c4fb037a8547a7c45a049a8b591315a62e2c8938/globalmount\"" pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.241332 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.243607 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-config-out\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.275494 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97mzr\" (UniqueName: \"kubernetes.io/projected/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-kube-api-access-97mzr\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.285074 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-23103dc1-5b21-46cd-9cbd-b4e24e77c30e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23103dc1-5b21-46cd-9cbd-b4e24e77c30e\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.735357 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:32 crc kubenswrapper[4811]: E1203 00:30:32.735569 4811 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 03 00:30:32 crc kubenswrapper[4811]: E1203 00:30:32.735640 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-secret-default-prometheus-proxy-tls podName:e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e nodeName:}" failed. No retries permitted until 2025-12-03 00:30:33.73562278 +0000 UTC m=+1473.877452252 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e") : secret "default-prometheus-proxy-tls" not found Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.939914 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.940336 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.940402 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.941337 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da6ab7c89c73f34f2e196fadad96f85f0b4d6e41e72b418a78dc01f58cbadf17"} pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:30:32 crc kubenswrapper[4811]: I1203 00:30:32.941438 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" containerID="cri-o://da6ab7c89c73f34f2e196fadad96f85f0b4d6e41e72b418a78dc01f58cbadf17" gracePeriod=600 Dec 03 00:30:33 crc kubenswrapper[4811]: I1203 00:30:33.747986 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:33 crc kubenswrapper[4811]: I1203 00:30:33.774639 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:30:33 crc kubenswrapper[4811]: I1203 00:30:33.786030 4811 generic.go:334] "Generic (PLEG): container finished" podID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerID="da6ab7c89c73f34f2e196fadad96f85f0b4d6e41e72b418a78dc01f58cbadf17" exitCode=0 Dec 03 00:30:33 crc kubenswrapper[4811]: I1203 00:30:33.786093 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerDied","Data":"da6ab7c89c73f34f2e196fadad96f85f0b4d6e41e72b418a78dc01f58cbadf17"} Dec 03 00:30:33 crc 
kubenswrapper[4811]: I1203 00:30:33.786126 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerStarted","Data":"405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f"} Dec 03 00:30:33 crc kubenswrapper[4811]: I1203 00:30:33.786144 4811 scope.go:117] "RemoveContainer" containerID="f4d25db9c9ac1df29df1bbcee3a02169fd962097c68bac9c7311fb3f69dcdc76" Dec 03 00:30:33 crc kubenswrapper[4811]: I1203 00:30:33.842554 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 03 00:30:34 crc kubenswrapper[4811]: I1203 00:30:34.286718 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 03 00:30:34 crc kubenswrapper[4811]: W1203 00:30:34.291364 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode46e9cee_1ba4_434c_a39e_f42b7ed9dc7e.slice/crio-2be84b59fb6969218e2e6331ff3178ae7ab18d9b9a0d0b3dec6674db89516121 WatchSource:0}: Error finding container 2be84b59fb6969218e2e6331ff3178ae7ab18d9b9a0d0b3dec6674db89516121: Status 404 returned error can't find the container with id 2be84b59fb6969218e2e6331ff3178ae7ab18d9b9a0d0b3dec6674db89516121 Dec 03 00:30:34 crc kubenswrapper[4811]: I1203 00:30:34.800160 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e","Type":"ContainerStarted","Data":"2be84b59fb6969218e2e6331ff3178ae7ab18d9b9a0d0b3dec6674db89516121"} Dec 03 00:30:39 crc kubenswrapper[4811]: I1203 00:30:39.848782 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e","Type":"ContainerStarted","Data":"6c56240171868bf62aa32a766d8983bf790d2e3ad691a6b7c9dfa6cbe853fc2b"} Dec 03 00:30:42 crc kubenswrapper[4811]: I1203 00:30:42.390957 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-dgvqc"] Dec 03 00:30:42 crc kubenswrapper[4811]: I1203 00:30:42.392701 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-dgvqc" Dec 03 00:30:42 crc kubenswrapper[4811]: I1203 00:30:42.403578 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-dgvqc"] Dec 03 00:30:42 crc kubenswrapper[4811]: I1203 00:30:42.565009 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzpwp\" (UniqueName: \"kubernetes.io/projected/13eccb10-6229-4374-a790-e8677f1438dd-kube-api-access-gzpwp\") pod \"default-snmp-webhook-6856cfb745-dgvqc\" (UID: \"13eccb10-6229-4374-a790-e8677f1438dd\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-dgvqc" Dec 03 00:30:42 crc kubenswrapper[4811]: I1203 00:30:42.667381 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzpwp\" (UniqueName: \"kubernetes.io/projected/13eccb10-6229-4374-a790-e8677f1438dd-kube-api-access-gzpwp\") pod \"default-snmp-webhook-6856cfb745-dgvqc\" (UID: \"13eccb10-6229-4374-a790-e8677f1438dd\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-dgvqc" Dec 03 00:30:42 crc kubenswrapper[4811]: I1203 00:30:42.690205 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzpwp\" (UniqueName: \"kubernetes.io/projected/13eccb10-6229-4374-a790-e8677f1438dd-kube-api-access-gzpwp\") pod \"default-snmp-webhook-6856cfb745-dgvqc\" (UID: \"13eccb10-6229-4374-a790-e8677f1438dd\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-dgvqc" Dec 03 00:30:42 crc kubenswrapper[4811]: I1203 00:30:42.710632 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-dgvqc" Dec 03 00:30:43 crc kubenswrapper[4811]: I1203 00:30:43.147154 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-dgvqc"] Dec 03 00:30:43 crc kubenswrapper[4811]: I1203 00:30:43.898639 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-dgvqc" event={"ID":"13eccb10-6229-4374-a790-e8677f1438dd","Type":"ContainerStarted","Data":"e9ffa52548a1cabb5078ccc320a7f78178c19b9d37e40e5eb4cb27c62680c826"} Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.814872 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.816112 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.823079 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.823150 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.823614 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.823723 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.823949 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.831698 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-pt64j" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.844054 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.932080 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-web-config\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.932135 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.932189 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.932243 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46e46bb8-2d4c-4142-b0bc-38b8daf2614c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46e46bb8-2d4c-4142-b0bc-38b8daf2614c\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.932285 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-config-volume\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.932340 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.932362 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b624c13e-3b23-45ef-9f45-93a6dc621cf0-config-out\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.932390 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b624c13e-3b23-45ef-9f45-93a6dc621cf0-tls-assets\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:45 crc kubenswrapper[4811]: I1203 00:30:45.932559 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfd2d\" (UniqueName: \"kubernetes.io/projected/b624c13e-3b23-45ef-9f45-93a6dc621cf0-kube-api-access-mfd2d\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.034184 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-web-config\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.034284 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.034343 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.034370 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46e46bb8-2d4c-4142-b0bc-38b8daf2614c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46e46bb8-2d4c-4142-b0bc-38b8daf2614c\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.034397 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-config-volume\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 
00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.034442 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.034459 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b624c13e-3b23-45ef-9f45-93a6dc621cf0-config-out\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.034483 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b624c13e-3b23-45ef-9f45-93a6dc621cf0-tls-assets\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.034513 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfd2d\" (UniqueName: \"kubernetes.io/projected/b624c13e-3b23-45ef-9f45-93a6dc621cf0-kube-api-access-mfd2d\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: E1203 00:30:46.041136 4811 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 03 00:30:46 crc kubenswrapper[4811]: E1203 00:30:46.041501 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-alertmanager-proxy-tls podName:b624c13e-3b23-45ef-9f45-93a6dc621cf0 nodeName:}" failed. No retries permitted until 2025-12-03 00:30:46.541469164 +0000 UTC m=+1486.683298636 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "b624c13e-3b23-45ef-9f45-93a6dc621cf0") : secret "default-alertmanager-proxy-tls" not found Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.046455 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b624c13e-3b23-45ef-9f45-93a6dc621cf0-config-out\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.047030 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-config-volume\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.048167 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.048926 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-web-config\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.049726 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.049766 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46e46bb8-2d4c-4142-b0bc-38b8daf2614c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46e46bb8-2d4c-4142-b0bc-38b8daf2614c\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8e7172107938c454ede290a4599f848b4edf9837a84fb3e3ab4e65ba85ed5d8d/globalmount\"" pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.051912 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b624c13e-3b23-45ef-9f45-93a6dc621cf0-tls-assets\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.060152 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfd2d\" (UniqueName: \"kubernetes.io/projected/b624c13e-3b23-45ef-9f45-93a6dc621cf0-kube-api-access-mfd2d\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.060671 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.088702 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46e46bb8-2d4c-4142-b0bc-38b8daf2614c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-46e46bb8-2d4c-4142-b0bc-38b8daf2614c\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.550624 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:46 crc kubenswrapper[4811]: E1203 00:30:46.550834 4811 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 03 00:30:46 crc kubenswrapper[4811]: E1203 00:30:46.550948 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-alertmanager-proxy-tls podName:b624c13e-3b23-45ef-9f45-93a6dc621cf0 nodeName:}" failed. No retries permitted until 2025-12-03 00:30:47.550927873 +0000 UTC m=+1487.692757345 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "b624c13e-3b23-45ef-9f45-93a6dc621cf0") : secret "default-alertmanager-proxy-tls" not found Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.924454 4811 generic.go:334] "Generic (PLEG): container finished" podID="e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e" containerID="6c56240171868bf62aa32a766d8983bf790d2e3ad691a6b7c9dfa6cbe853fc2b" exitCode=0 Dec 03 00:30:46 crc kubenswrapper[4811]: I1203 00:30:46.924522 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e","Type":"ContainerDied","Data":"6c56240171868bf62aa32a766d8983bf790d2e3ad691a6b7c9dfa6cbe853fc2b"} Dec 03 00:30:47 crc kubenswrapper[4811]: I1203 00:30:47.567136 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:47 crc kubenswrapper[4811]: E1203 00:30:47.567366 4811 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 03 00:30:47 crc kubenswrapper[4811]: E1203 00:30:47.567849 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-alertmanager-proxy-tls podName:b624c13e-3b23-45ef-9f45-93a6dc621cf0 nodeName:}" failed. No retries permitted until 2025-12-03 00:30:49.567822371 +0000 UTC m=+1489.709651843 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "b624c13e-3b23-45ef-9f45-93a6dc621cf0") : secret "default-alertmanager-proxy-tls" not found Dec 03 00:30:49 crc kubenswrapper[4811]: I1203 00:30:49.599326 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:49 crc kubenswrapper[4811]: I1203 00:30:49.606731 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b624c13e-3b23-45ef-9f45-93a6dc621cf0-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"b624c13e-3b23-45ef-9f45-93a6dc621cf0\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:49 crc kubenswrapper[4811]: I1203 00:30:49.737272 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 03 00:30:50 crc kubenswrapper[4811]: I1203 00:30:50.294238 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 03 00:30:50 crc kubenswrapper[4811]: W1203 00:30:50.382080 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb624c13e_3b23_45ef_9f45_93a6dc621cf0.slice/crio-fc4020a46fbb207a831ac3015c316fda64f072ec113e540a9e654a6e39ed5c48 WatchSource:0}: Error finding container fc4020a46fbb207a831ac3015c316fda64f072ec113e540a9e654a6e39ed5c48: Status 404 returned error can't find the container with id fc4020a46fbb207a831ac3015c316fda64f072ec113e540a9e654a6e39ed5c48 Dec 03 00:30:50 crc kubenswrapper[4811]: I1203 00:30:50.973662 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"b624c13e-3b23-45ef-9f45-93a6dc621cf0","Type":"ContainerStarted","Data":"fc4020a46fbb207a831ac3015c316fda64f072ec113e540a9e654a6e39ed5c48"} Dec 03 00:30:50 crc kubenswrapper[4811]: I1203 00:30:50.979511 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-dgvqc" event={"ID":"13eccb10-6229-4374-a790-e8677f1438dd","Type":"ContainerStarted","Data":"ae4c577e70d6918beced4e7f50bed24d66cfbc54801fe7a7dfc7f7e5ef6079f5"} Dec 03 00:30:50 crc kubenswrapper[4811]: I1203 00:30:50.997102 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-dgvqc" podStartSLOduration=2.115672705 podStartE2EDuration="8.997082095s" podCreationTimestamp="2025-12-03 00:30:42 +0000 UTC" firstStartedPulling="2025-12-03 00:30:43.151908793 +0000 UTC m=+1483.293738275" lastFinishedPulling="2025-12-03 00:30:50.033318193 +0000 UTC m=+1490.175147665" observedRunningTime="2025-12-03 00:30:50.992138055 +0000 UTC m=+1491.133967547" watchObservedRunningTime="2025-12-03 00:30:50.997082095 +0000 UTC m=+1491.138911567" Dec 03 00:30:52 crc kubenswrapper[4811]: I1203 00:30:52.997041 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"b624c13e-3b23-45ef-9f45-93a6dc621cf0","Type":"ContainerStarted","Data":"735d5c815381dba7748bbc7433438eb47960a4667ba0adaa2ce7957de943309a"} Dec 03 00:30:55 crc kubenswrapper[4811]: I1203 00:30:55.017717 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e","Type":"ContainerStarted","Data":"c9ee75033a5de9e7a32cf552b90d0b0be463b575678ceb106eb83925e5f10dbe"} Dec 03 00:30:57 crc kubenswrapper[4811]: I1203 00:30:57.032619 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e","Type":"ContainerStarted","Data":"fb9fc07473e64818f6d3b7498576e6636c8fa2c135183dc071beb1cfb6630db3"} Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.047058 4811 generic.go:334] "Generic (PLEG): container finished" podID="b624c13e-3b23-45ef-9f45-93a6dc621cf0" containerID="735d5c815381dba7748bbc7433438eb47960a4667ba0adaa2ce7957de943309a" exitCode=0 Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.047145 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"b624c13e-3b23-45ef-9f45-93a6dc621cf0","Type":"ContainerDied","Data":"735d5c815381dba7748bbc7433438eb47960a4667ba0adaa2ce7957de943309a"} Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.734626 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6"] Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.735960 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.741364 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.741411 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.741450 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.741503 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-xxsbb" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.748465 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6"] Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.879360 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.879423 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7zb7\" (UniqueName: \"kubernetes.io/projected/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-kube-api-access-w7zb7\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.879449 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.879478 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.879512 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.981228 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.981341 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: E1203 00:30:59.981466 4811 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 03 00:30:59 crc kubenswrapper[4811]: E1203 00:30:59.981561 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-default-cloud1-coll-meter-proxy-tls podName:16c2ef3e-9973-4bf8-a5ca-e8857c7d478b nodeName:}" failed. No retries permitted until 2025-12-03 00:31:00.48153761 +0000 UTC m=+1500.623367102 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" (UID: "16c2ef3e-9973-4bf8-a5ca-e8857c7d478b") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.981800 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.981871 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.981916 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7zb7\" (UniqueName: \"kubernetes.io/projected/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-kube-api-access-w7zb7\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.981949 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.983357 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.987286 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:30:59 crc kubenswrapper[4811]: I1203 00:30:59.998922 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7zb7\" (UniqueName: \"kubernetes.io/projected/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-kube-api-access-w7zb7\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:31:00 crc kubenswrapper[4811]: I1203 
00:31:00.489657 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:31:00 crc kubenswrapper[4811]: E1203 00:31:00.489986 4811 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 03 00:31:00 crc kubenswrapper[4811]: E1203 00:31:00.490061 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-default-cloud1-coll-meter-proxy-tls podName:16c2ef3e-9973-4bf8-a5ca-e8857c7d478b nodeName:}" failed. No retries permitted until 2025-12-03 00:31:01.490033145 +0000 UTC m=+1501.631862627 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" (UID: "16c2ef3e-9973-4bf8-a5ca-e8857c7d478b") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 03 00:31:01 crc kubenswrapper[4811]: I1203 00:31:01.505999 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:31:01 crc kubenswrapper[4811]: I1203 00:31:01.511222 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16c2ef3e-9973-4bf8-a5ca-e8857c7d478b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6\" (UID: \"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:31:01 crc kubenswrapper[4811]: I1203 00:31:01.596086 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-xxsbb" Dec 03 00:31:01 crc kubenswrapper[4811]: I1203 00:31:01.604815 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.054129 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb"] Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.055509 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.058704 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.058746 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.068708 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb"] Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.129798 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fbf98c09-5d18-49b5-9222-d3e42c6de766-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.129894 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.129927 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fbf98c09-5d18-49b5-9222-d3e42c6de766-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.129954 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h75z\" (UniqueName: \"kubernetes.io/projected/fbf98c09-5d18-49b5-9222-d3e42c6de766-kube-api-access-7h75z\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.129988 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.231186 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.231519 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fbf98c09-5d18-49b5-9222-d3e42c6de766-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.231630 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h75z\" (UniqueName: \"kubernetes.io/projected/fbf98c09-5d18-49b5-9222-d3e42c6de766-kube-api-access-7h75z\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: E1203 00:31:03.231388 4811 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:31:03 crc kubenswrapper[4811]: E1203 00:31:03.231861 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-default-cloud1-ceil-meter-proxy-tls podName:fbf98c09-5d18-49b5-9222-d3e42c6de766 nodeName:}" failed. No retries permitted until 2025-12-03 00:31:03.731824928 +0000 UTC m=+1503.873654430 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" (UID: "fbf98c09-5d18-49b5-9222-d3e42c6de766") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.231913 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.232232 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fbf98c09-5d18-49b5-9222-d3e42c6de766-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.232589 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fbf98c09-5d18-49b5-9222-d3e42c6de766-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.235640 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fbf98c09-5d18-49b5-9222-d3e42c6de766-socket-dir\") pod 
\"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.239608 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.261161 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h75z\" (UniqueName: \"kubernetes.io/projected/fbf98c09-5d18-49b5-9222-d3e42c6de766-kube-api-access-7h75z\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: I1203 00:31:03.738839 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:03 crc kubenswrapper[4811]: E1203 00:31:03.739038 4811 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:31:03 crc kubenswrapper[4811]: E1203 00:31:03.739109 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-default-cloud1-ceil-meter-proxy-tls podName:fbf98c09-5d18-49b5-9222-d3e42c6de766 nodeName:}" failed. No retries permitted until 2025-12-03 00:31:04.739089873 +0000 UTC m=+1504.880919345 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" (UID: "fbf98c09-5d18-49b5-9222-d3e42c6de766") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:31:04 crc kubenswrapper[4811]: I1203 00:31:04.752894 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:04 crc kubenswrapper[4811]: E1203 00:31:04.753127 4811 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:31:04 crc kubenswrapper[4811]: E1203 00:31:04.753666 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-default-cloud1-ceil-meter-proxy-tls podName:fbf98c09-5d18-49b5-9222-d3e42c6de766 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:31:06.753635525 +0000 UTC m=+1506.895464997 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" (UID: "fbf98c09-5d18-49b5-9222-d3e42c6de766") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:31:06 crc kubenswrapper[4811]: I1203 00:31:06.781670 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:06 crc kubenswrapper[4811]: I1203 00:31:06.788922 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbf98c09-5d18-49b5-9222-d3e42c6de766-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb\" (UID: \"fbf98c09-5d18-49b5-9222-d3e42c6de766\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:06 crc kubenswrapper[4811]: I1203 00:31:06.975839 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.685963 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g"] Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.690331 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.695712 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g"] Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.697196 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.697834 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.818693 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.818736 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.818759 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfr4l\" (UniqueName: \"kubernetes.io/projected/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-kube-api-access-mfr4l\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.818844 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.818904 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.920664 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: 
I1203 00:31:08.920701 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfr4l\" (UniqueName: \"kubernetes.io/projected/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-kube-api-access-mfr4l\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.920735 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.920777 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.920843 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: E1203 00:31:08.920933 4811 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:31:08 crc kubenswrapper[4811]: E1203 00:31:08.921011 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-default-cloud1-sens-meter-proxy-tls podName:6f1318e7-8d9a-4324-9d53-4453fdd4d04e nodeName:}" failed. No retries permitted until 2025-12-03 00:31:09.420987343 +0000 UTC m=+1509.562816815 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" (UID: "6f1318e7-8d9a-4324-9d53-4453fdd4d04e") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.921840 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.922104 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.934565 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:08 crc kubenswrapper[4811]: I1203 00:31:08.937712 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfr4l\" (UniqueName: \"kubernetes.io/projected/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-kube-api-access-mfr4l\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:09 crc kubenswrapper[4811]: E1203 00:31:09.026755 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/openshift/origin-oauth-proxy:latest" Dec 03 00:31:09 crc kubenswrapper[4811]: E1203 00:31:09.026903 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:oauth-proxy,Image:quay.io/openshift/origin-oauth-proxy:latest,Command:[],Args:[-https-address=:9092 -tls-cert=/etc/tls/private/tls.crt -tls-key=/etc/tls/private/tls.key -upstream=http://localhost:9090/ -cookie-secret-file=/etc/proxy/secrets/session_secret -openshift-service-account=prometheus-stf -openshift-sar={\"namespace\":\"service-telemetry\",\"resource\": \"prometheuses\", \"resourceAPIGroup\":\"monitoring.rhobs\", \"verb\":\"get\"} -openshift-delegate-urls={\"/\":{\"namespace\":\"service-telemetry\",\"resource\": \"prometheuses\", \"group\":\"monitoring.rhobs\", 
\"verb\":\"get\"}}],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:9092,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:secret-default-prometheus-proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-session-secret,ReadOnly:false,MountPath:/etc/proxy/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97mzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-default-0_service-telemetry(e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:31:09 crc kubenswrapper[4811]: E1203 00:31:09.028512 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/prometheus-default-0" podUID="e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e" Dec 03 00:31:09 crc kubenswrapper[4811]: E1203 00:31:09.132752 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e" Dec 03 00:31:09 crc kubenswrapper[4811]: I1203 00:31:09.428546 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:09 crc kubenswrapper[4811]: E1203 00:31:09.428769 4811 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:31:09 crc kubenswrapper[4811]: E1203 00:31:09.428861 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-default-cloud1-sens-meter-proxy-tls podName:6f1318e7-8d9a-4324-9d53-4453fdd4d04e nodeName:}" failed. No retries permitted until 2025-12-03 00:31:10.428839734 +0000 UTC m=+1510.570669206 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" (UID: "6f1318e7-8d9a-4324-9d53-4453fdd4d04e") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:31:09 crc kubenswrapper[4811]: I1203 00:31:09.490253 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6"] Dec 03 00:31:09 crc kubenswrapper[4811]: W1203 00:31:09.500221 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16c2ef3e_9973_4bf8_a5ca_e8857c7d478b.slice/crio-c7560b03f12913856f1b57edb8345ac6beba904f842be4f4bedace8acbacf3f7 WatchSource:0}: Error finding container c7560b03f12913856f1b57edb8345ac6beba904f842be4f4bedace8acbacf3f7: Status 404 returned error can't find the container with id c7560b03f12913856f1b57edb8345ac6beba904f842be4f4bedace8acbacf3f7 Dec 03 00:31:09 crc kubenswrapper[4811]: I1203 00:31:09.504227 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb"] Dec 03 00:31:10 crc kubenswrapper[4811]: I1203 00:31:10.146583 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" event={"ID":"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b","Type":"ContainerStarted","Data":"c7560b03f12913856f1b57edb8345ac6beba904f842be4f4bedace8acbacf3f7"} Dec 03 00:31:10 crc kubenswrapper[4811]: I1203 00:31:10.158558 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" event={"ID":"fbf98c09-5d18-49b5-9222-d3e42c6de766","Type":"ContainerStarted","Data":"9c845cfced048c6e6a4aa9238b9a5c0feb344dcdb00b7724fe3218add37496d7"} Dec 03 00:31:10 crc kubenswrapper[4811]: I1203 00:31:10.444792 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:10 crc kubenswrapper[4811]: I1203 00:31:10.457206 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f1318e7-8d9a-4324-9d53-4453fdd4d04e-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g\" (UID: \"6f1318e7-8d9a-4324-9d53-4453fdd4d04e\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:10 crc kubenswrapper[4811]: I1203 00:31:10.529119 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" Dec 03 00:31:11 crc kubenswrapper[4811]: I1203 00:31:11.341931 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g"] Dec 03 00:31:12 crc kubenswrapper[4811]: I1203 00:31:12.188045 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" event={"ID":"fbf98c09-5d18-49b5-9222-d3e42c6de766","Type":"ContainerStarted","Data":"df7faef68149906bba6920a9e00c3afa574d364f881db071c8c0d2a0a32c291d"} Dec 03 00:31:12 crc kubenswrapper[4811]: I1203 00:31:12.189988 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" event={"ID":"6f1318e7-8d9a-4324-9d53-4453fdd4d04e","Type":"ContainerStarted","Data":"9269b523c3cc248861c9c5bc28bf124b71989316ca6b08c6a752f47763bb154a"} Dec 03 00:31:12 crc kubenswrapper[4811]: I1203 00:31:12.190010 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" event={"ID":"6f1318e7-8d9a-4324-9d53-4453fdd4d04e","Type":"ContainerStarted","Data":"2efeb66cc769bc961c88de891329dabf4b572fa0d5b6d46295b6b8f7ebd19d56"} Dec 03 00:31:12 crc kubenswrapper[4811]: I1203 00:31:12.192428 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"b624c13e-3b23-45ef-9f45-93a6dc621cf0","Type":"ContainerStarted","Data":"1f15c5357841fbc27174f812e502cfed3992eb6ae6dd1a6fc42912b534db73b2"} Dec 03 00:31:12 crc kubenswrapper[4811]: I1203 00:31:12.195595 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" event={"ID":"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b","Type":"ContainerStarted","Data":"d53de141e6163756f4c62686208df20aabac7324f20190e76cc72fc618b52069"} Dec 03 00:31:13 crc kubenswrapper[4811]: I1203 00:31:13.235490 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"b624c13e-3b23-45ef-9f45-93a6dc621cf0","Type":"ContainerStarted","Data":"04e1a38e7598bca5d1abac6f702f965ba0f8a9c8582a01896a4b4970614ee342"} Dec 03 00:31:13 crc kubenswrapper[4811]: I1203 00:31:13.843091 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Dec 03 00:31:13 crc kubenswrapper[4811]: E1203 00:31:13.850826 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e" Dec 03 00:31:14 crc kubenswrapper[4811]: I1203 00:31:14.246986 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"b624c13e-3b23-45ef-9f45-93a6dc621cf0","Type":"ContainerStarted","Data":"c2fc5b178c586724d2edb8ddacf58829f023c5e9a279742de867ac5bc4827ac1"} Dec 03 00:31:15 crc kubenswrapper[4811]: I1203 00:31:15.897694 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=17.200136722 podStartE2EDuration="31.897678122s" podCreationTimestamp="2025-12-03 00:30:44 +0000 UTC" firstStartedPulling="2025-12-03 00:30:59.049321072 
+0000 UTC m=+1499.191150544" lastFinishedPulling="2025-12-03 00:31:13.746862472 +0000 UTC m=+1513.888691944" observedRunningTime="2025-12-03 00:31:14.280338492 +0000 UTC m=+1514.422167964" watchObservedRunningTime="2025-12-03 00:31:15.897678122 +0000 UTC m=+1516.039507594" Dec 03 00:31:15 crc kubenswrapper[4811]: I1203 00:31:15.899654 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc"] Dec 03 00:31:15 crc kubenswrapper[4811]: I1203 00:31:15.900650 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:15 crc kubenswrapper[4811]: I1203 00:31:15.903762 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Dec 03 00:31:15 crc kubenswrapper[4811]: I1203 00:31:15.903769 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Dec 03 00:31:15 crc kubenswrapper[4811]: I1203 00:31:15.919520 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc"] Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.025158 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f59c9070-a92e-4e0b-8369-072b2fa631b8-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc\" (UID: \"f59c9070-a92e-4e0b-8369-072b2fa631b8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.025276 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fl8q\" (UniqueName: \"kubernetes.io/projected/f59c9070-a92e-4e0b-8369-072b2fa631b8-kube-api-access-5fl8q\") pod \"default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc\" (UID: \"f59c9070-a92e-4e0b-8369-072b2fa631b8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.025301 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f59c9070-a92e-4e0b-8369-072b2fa631b8-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc\" (UID: \"f59c9070-a92e-4e0b-8369-072b2fa631b8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.025319 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f59c9070-a92e-4e0b-8369-072b2fa631b8-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc\" (UID: \"f59c9070-a92e-4e0b-8369-072b2fa631b8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.126455 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f59c9070-a92e-4e0b-8369-072b2fa631b8-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc\" (UID: \"f59c9070-a92e-4e0b-8369-072b2fa631b8\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.126873 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fl8q\" (UniqueName: \"kubernetes.io/projected/f59c9070-a92e-4e0b-8369-072b2fa631b8-kube-api-access-5fl8q\") pod \"default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc\" (UID: \"f59c9070-a92e-4e0b-8369-072b2fa631b8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.126904 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f59c9070-a92e-4e0b-8369-072b2fa631b8-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc\" (UID: \"f59c9070-a92e-4e0b-8369-072b2fa631b8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.126939 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f59c9070-a92e-4e0b-8369-072b2fa631b8-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc\" (UID: \"f59c9070-a92e-4e0b-8369-072b2fa631b8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.127937 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f59c9070-a92e-4e0b-8369-072b2fa631b8-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc\" (UID: \"f59c9070-a92e-4e0b-8369-072b2fa631b8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.130802 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f59c9070-a92e-4e0b-8369-072b2fa631b8-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc\" (UID: \"f59c9070-a92e-4e0b-8369-072b2fa631b8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.142239 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f59c9070-a92e-4e0b-8369-072b2fa631b8-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc\" (UID: \"f59c9070-a92e-4e0b-8369-072b2fa631b8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.149639 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fl8q\" (UniqueName: \"kubernetes.io/projected/f59c9070-a92e-4e0b-8369-072b2fa631b8-kube-api-access-5fl8q\") pod \"default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc\" (UID: \"f59c9070-a92e-4e0b-8369-072b2fa631b8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.219027 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.340064 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww"] Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.340977 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.343376 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.348671 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww"] Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.431227 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6a461187-6080-47bb-8d2b-9192a2075d14-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww\" (UID: \"6a461187-6080-47bb-8d2b-9192a2075d14\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.431381 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsxnv\" (UniqueName: \"kubernetes.io/projected/6a461187-6080-47bb-8d2b-9192a2075d14-kube-api-access-qsxnv\") pod \"default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww\" (UID: \"6a461187-6080-47bb-8d2b-9192a2075d14\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.431485 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6a461187-6080-47bb-8d2b-9192a2075d14-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww\" (UID: \"6a461187-6080-47bb-8d2b-9192a2075d14\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.431571 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/6a461187-6080-47bb-8d2b-9192a2075d14-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww\" (UID: \"6a461187-6080-47bb-8d2b-9192a2075d14\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.533276 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsxnv\" (UniqueName: \"kubernetes.io/projected/6a461187-6080-47bb-8d2b-9192a2075d14-kube-api-access-qsxnv\") pod \"default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww\" (UID: \"6a461187-6080-47bb-8d2b-9192a2075d14\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.533349 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6a461187-6080-47bb-8d2b-9192a2075d14-sg-core-config\") pod 
\"default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww\" (UID: \"6a461187-6080-47bb-8d2b-9192a2075d14\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.533406 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/6a461187-6080-47bb-8d2b-9192a2075d14-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww\" (UID: \"6a461187-6080-47bb-8d2b-9192a2075d14\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.533509 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6a461187-6080-47bb-8d2b-9192a2075d14-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww\" (UID: \"6a461187-6080-47bb-8d2b-9192a2075d14\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.534121 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6a461187-6080-47bb-8d2b-9192a2075d14-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww\" (UID: \"6a461187-6080-47bb-8d2b-9192a2075d14\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.534494 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6a461187-6080-47bb-8d2b-9192a2075d14-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww\" (UID: \"6a461187-6080-47bb-8d2b-9192a2075d14\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.536999 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/6a461187-6080-47bb-8d2b-9192a2075d14-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww\" (UID: \"6a461187-6080-47bb-8d2b-9192a2075d14\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.556108 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsxnv\" (UniqueName: \"kubernetes.io/projected/6a461187-6080-47bb-8d2b-9192a2075d14-kube-api-access-qsxnv\") pod \"default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww\" (UID: \"6a461187-6080-47bb-8d2b-9192a2075d14\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:16 crc kubenswrapper[4811]: I1203 00:31:16.658228 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" Dec 03 00:31:17 crc kubenswrapper[4811]: I1203 00:31:17.766190 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc"] Dec 03 00:31:18 crc kubenswrapper[4811]: I1203 00:31:18.019072 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww"] Dec 03 00:31:18 crc kubenswrapper[4811]: W1203 00:31:18.034868 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a461187_6080_47bb_8d2b_9192a2075d14.slice/crio-5ba536846b1bc38bcce16433bf6131c534570b29c56f89e12cb4180da4521f5c WatchSource:0}: Error finding container 5ba536846b1bc38bcce16433bf6131c534570b29c56f89e12cb4180da4521f5c: Status 404 returned error can't find the container with id 5ba536846b1bc38bcce16433bf6131c534570b29c56f89e12cb4180da4521f5c Dec 03 00:31:18 crc kubenswrapper[4811]: I1203 00:31:18.289722 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" event={"ID":"f59c9070-a92e-4e0b-8369-072b2fa631b8","Type":"ContainerStarted","Data":"bded684c44d0911dee05030c4a5883b94e6ee1ee2da8db14cc2e5179bca80770"} Dec 03 00:31:18 crc kubenswrapper[4811]: I1203 00:31:18.289788 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" event={"ID":"f59c9070-a92e-4e0b-8369-072b2fa631b8","Type":"ContainerStarted","Data":"1a0be93306b0ec7b2f2a7ccbc470d111fbaf053ef97822c54415439131d77be0"} Dec 03 00:31:18 crc kubenswrapper[4811]: I1203 00:31:18.293117 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" event={"ID":"fbf98c09-5d18-49b5-9222-d3e42c6de766","Type":"ContainerStarted","Data":"30c6192b60521561c0bdb9c5fac3a2a44a49b787de1f84b5d1042206d81938a3"} Dec 03 00:31:18 crc kubenswrapper[4811]: I1203 00:31:18.294603 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" event={"ID":"6a461187-6080-47bb-8d2b-9192a2075d14","Type":"ContainerStarted","Data":"5ba536846b1bc38bcce16433bf6131c534570b29c56f89e12cb4180da4521f5c"} Dec 03 00:31:18 crc kubenswrapper[4811]: I1203 00:31:18.302191 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" event={"ID":"6f1318e7-8d9a-4324-9d53-4453fdd4d04e","Type":"ContainerStarted","Data":"8a19a0cb14956aaa71189618143187d2e694ded62ce5985a76453e097e125027"} Dec 03 00:31:18 crc kubenswrapper[4811]: I1203 00:31:18.304647 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" event={"ID":"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b","Type":"ContainerStarted","Data":"4005686106daa38f52dc3a377fc4b336b116d715e41b2b5d17a33ee3620a03dd"} Dec 03 00:31:18 crc kubenswrapper[4811]: I1203 00:31:18.843659 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Dec 03 00:31:18 crc kubenswrapper[4811]: E1203 00:31:18.849235 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e" Dec 03 00:31:18 crc kubenswrapper[4811]: I1203 00:31:18.905000 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Dec 03 00:31:19 crc kubenswrapper[4811]: I1203 00:31:19.321039 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" event={"ID":"6a461187-6080-47bb-8d2b-9192a2075d14","Type":"ContainerStarted","Data":"d9c028ee59a1f144667f5466cf196e2b23626c6ee3e54f202c7be00a3be00a74"} Dec 03 00:31:19 crc kubenswrapper[4811]: I1203 00:31:19.401900 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Dec 03 00:31:24 crc kubenswrapper[4811]: I1203 00:31:24.386498 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" event={"ID":"fbf98c09-5d18-49b5-9222-d3e42c6de766","Type":"ContainerStarted","Data":"0d479f018d1d5f33a1802edd8e5d752f8ffabe61cef41403aa1fe45ada0ceffb"} Dec 03 00:31:24 crc kubenswrapper[4811]: I1203 00:31:24.390078 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" event={"ID":"6a461187-6080-47bb-8d2b-9192a2075d14","Type":"ContainerStarted","Data":"4e54a00d7e8ae323b365c84302631e9e39ea7e65559a8c350e3a0ab6e868108f"} Dec 03 00:31:24 crc kubenswrapper[4811]: I1203 00:31:24.394548 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" event={"ID":"6f1318e7-8d9a-4324-9d53-4453fdd4d04e","Type":"ContainerStarted","Data":"56a9a0125b0dc3e735a3fc28754c1a114aba16f51bbe3614b2079b0f0dab8430"} Dec 03 00:31:24 crc kubenswrapper[4811]: I1203 00:31:24.397753 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e","Type":"ContainerStarted","Data":"6cfab5f65127863a920182202a425aa8bc7ffbee90e7a44f53ebfc398c59199e"} Dec 03 00:31:24 crc kubenswrapper[4811]: I1203 00:31:24.400689 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" event={"ID":"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b","Type":"ContainerStarted","Data":"5a57d5518c49ab47ca66fe51293f57c9536360749c22dbcb4a8741801bff059e"} Dec 03 00:31:24 crc kubenswrapper[4811]: I1203 00:31:24.403171 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" event={"ID":"f59c9070-a92e-4e0b-8369-072b2fa631b8","Type":"ContainerStarted","Data":"e9811c6d4595161d9dc48dd35bd5ea71431919a17006c774ad9719960f868e9f"} Dec 03 00:31:24 crc kubenswrapper[4811]: I1203 00:31:24.423211 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" podStartSLOduration=7.812407471 podStartE2EDuration="21.423189921s" podCreationTimestamp="2025-12-03 00:31:03 +0000 UTC" firstStartedPulling="2025-12-03 00:31:09.529117462 +0000 UTC m=+1509.670946934" lastFinishedPulling="2025-12-03 00:31:23.139899912 +0000 UTC m=+1523.281729384" observedRunningTime="2025-12-03 00:31:24.413573478 +0000 UTC m=+1524.555402970" watchObservedRunningTime="2025-12-03 
00:31:24.423189921 +0000 UTC m=+1524.565019393" Dec 03 00:31:24 crc kubenswrapper[4811]: I1203 00:31:24.475425 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" podStartSLOduration=11.596155714 podStartE2EDuration="25.475395595s" podCreationTimestamp="2025-12-03 00:30:59 +0000 UTC" firstStartedPulling="2025-12-03 00:31:09.505046139 +0000 UTC m=+1509.646875611" lastFinishedPulling="2025-12-03 00:31:23.38428601 +0000 UTC m=+1523.526115492" observedRunningTime="2025-12-03 00:31:24.440729426 +0000 UTC m=+1524.582558918" watchObservedRunningTime="2025-12-03 00:31:24.475395595 +0000 UTC m=+1524.617225067" Dec 03 00:31:24 crc kubenswrapper[4811]: I1203 00:31:24.481928 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" podStartSLOduration=3.908212663 podStartE2EDuration="9.481904433s" podCreationTimestamp="2025-12-03 00:31:15 +0000 UTC" firstStartedPulling="2025-12-03 00:31:17.778462912 +0000 UTC m=+1517.920292384" lastFinishedPulling="2025-12-03 00:31:23.352154642 +0000 UTC m=+1523.493984154" observedRunningTime="2025-12-03 00:31:24.465499166 +0000 UTC m=+1524.607328648" watchObservedRunningTime="2025-12-03 00:31:24.481904433 +0000 UTC m=+1524.623733905" Dec 03 00:31:24 crc kubenswrapper[4811]: I1203 00:31:24.522033 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" podStartSLOduration=3.430042942 podStartE2EDuration="8.522010145s" podCreationTimestamp="2025-12-03 00:31:16 +0000 UTC" firstStartedPulling="2025-12-03 00:31:18.055674506 +0000 UTC m=+1518.197503978" lastFinishedPulling="2025-12-03 00:31:23.147641709 +0000 UTC m=+1523.289471181" observedRunningTime="2025-12-03 00:31:24.515826634 +0000 UTC m=+1524.657656116" watchObservedRunningTime="2025-12-03 00:31:24.522010145 +0000 UTC m=+1524.663839617" Dec 03 00:31:24 crc kubenswrapper[4811]: I1203 00:31:24.541918 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" podStartSLOduration=4.719204883 podStartE2EDuration="16.541900166s" podCreationTimestamp="2025-12-03 00:31:08 +0000 UTC" firstStartedPulling="2025-12-03 00:31:11.357873093 +0000 UTC m=+1511.499702565" lastFinishedPulling="2025-12-03 00:31:23.180568376 +0000 UTC m=+1523.322397848" observedRunningTime="2025-12-03 00:31:24.540434751 +0000 UTC m=+1524.682264223" watchObservedRunningTime="2025-12-03 00:31:24.541900166 +0000 UTC m=+1524.683729648" Dec 03 00:31:24 crc kubenswrapper[4811]: I1203 00:31:24.565775 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.967568216 podStartE2EDuration="54.565756654s" podCreationTimestamp="2025-12-03 00:30:30 +0000 UTC" firstStartedPulling="2025-12-03 00:30:34.293962062 +0000 UTC m=+1474.435791574" lastFinishedPulling="2025-12-03 00:31:23.89215054 +0000 UTC m=+1524.033980012" observedRunningTime="2025-12-03 00:31:24.563711074 +0000 UTC m=+1524.705540566" watchObservedRunningTime="2025-12-03 00:31:24.565756654 +0000 UTC m=+1524.707586126" Dec 03 00:31:25 crc kubenswrapper[4811]: I1203 00:31:25.849415 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sgc6x"] Dec 03 00:31:25 crc kubenswrapper[4811]: I1203 00:31:25.850949 
4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:25 crc kubenswrapper[4811]: I1203 00:31:25.860629 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sgc6x"] Dec 03 00:31:25 crc kubenswrapper[4811]: I1203 00:31:25.881534 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ed48a4-3b42-48e8-ae22-35782b9c5c68-catalog-content\") pod \"community-operators-sgc6x\" (UID: \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\") " pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:25 crc kubenswrapper[4811]: I1203 00:31:25.881607 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ed48a4-3b42-48e8-ae22-35782b9c5c68-utilities\") pod \"community-operators-sgc6x\" (UID: \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\") " pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:25 crc kubenswrapper[4811]: I1203 00:31:25.881634 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5vmh\" (UniqueName: \"kubernetes.io/projected/88ed48a4-3b42-48e8-ae22-35782b9c5c68-kube-api-access-l5vmh\") pod \"community-operators-sgc6x\" (UID: \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\") " pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:25 crc kubenswrapper[4811]: I1203 00:31:25.983328 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ed48a4-3b42-48e8-ae22-35782b9c5c68-catalog-content\") pod \"community-operators-sgc6x\" (UID: \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\") " pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:25 crc kubenswrapper[4811]: I1203 00:31:25.983417 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ed48a4-3b42-48e8-ae22-35782b9c5c68-utilities\") pod \"community-operators-sgc6x\" (UID: \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\") " pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:25 crc kubenswrapper[4811]: I1203 00:31:25.983462 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5vmh\" (UniqueName: \"kubernetes.io/projected/88ed48a4-3b42-48e8-ae22-35782b9c5c68-kube-api-access-l5vmh\") pod \"community-operators-sgc6x\" (UID: \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\") " pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:25 crc kubenswrapper[4811]: I1203 00:31:25.984168 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ed48a4-3b42-48e8-ae22-35782b9c5c68-catalog-content\") pod \"community-operators-sgc6x\" (UID: \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\") " pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:25 crc kubenswrapper[4811]: I1203 00:31:25.984194 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ed48a4-3b42-48e8-ae22-35782b9c5c68-utilities\") pod \"community-operators-sgc6x\" (UID: \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\") " pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:26 crc kubenswrapper[4811]: I1203 
00:31:26.012381 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5vmh\" (UniqueName: \"kubernetes.io/projected/88ed48a4-3b42-48e8-ae22-35782b9c5c68-kube-api-access-l5vmh\") pod \"community-operators-sgc6x\" (UID: \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\") " pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:26 crc kubenswrapper[4811]: I1203 00:31:26.174220 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:26 crc kubenswrapper[4811]: I1203 00:31:26.677434 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sgc6x"] Dec 03 00:31:27 crc kubenswrapper[4811]: I1203 00:31:27.432300 4811 generic.go:334] "Generic (PLEG): container finished" podID="88ed48a4-3b42-48e8-ae22-35782b9c5c68" containerID="3163e1f7e9f1fa1fc4aa0e12d116fddba14f8762f72ba268024d3019180a57c4" exitCode=0 Dec 03 00:31:27 crc kubenswrapper[4811]: I1203 00:31:27.432393 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgc6x" event={"ID":"88ed48a4-3b42-48e8-ae22-35782b9c5c68","Type":"ContainerDied","Data":"3163e1f7e9f1fa1fc4aa0e12d116fddba14f8762f72ba268024d3019180a57c4"} Dec 03 00:31:27 crc kubenswrapper[4811]: I1203 00:31:27.433467 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgc6x" event={"ID":"88ed48a4-3b42-48e8-ae22-35782b9c5c68","Type":"ContainerStarted","Data":"33191fac927ebe2174ea2edbcd921ce8c43e1061041313c72d9d0f124603e959"} Dec 03 00:31:28 crc kubenswrapper[4811]: I1203 00:31:28.441658 4811 generic.go:334] "Generic (PLEG): container finished" podID="88ed48a4-3b42-48e8-ae22-35782b9c5c68" containerID="cdc4f85d58696c15c3fc8e89f14e21f3d5e981ba40fc015673a93106d075940c" exitCode=0 Dec 03 00:31:28 crc kubenswrapper[4811]: I1203 00:31:28.441735 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgc6x" event={"ID":"88ed48a4-3b42-48e8-ae22-35782b9c5c68","Type":"ContainerDied","Data":"cdc4f85d58696c15c3fc8e89f14e21f3d5e981ba40fc015673a93106d075940c"} Dec 03 00:31:28 crc kubenswrapper[4811]: I1203 00:31:28.866659 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-tmpsp"] Dec 03 00:31:28 crc kubenswrapper[4811]: I1203 00:31:28.867060 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" podUID="949f6876-d78f-49fd-b12b-80d0be1ede0b" containerName="default-interconnect" containerID="cri-o://0e8cc345a9c59779ddf8e762f64534e18a15e30a2841ccec6631e243b94e244f" gracePeriod=30 Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.452130 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgc6x" event={"ID":"88ed48a4-3b42-48e8-ae22-35782b9c5c68","Type":"ContainerStarted","Data":"015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9"} Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.456934 4811 generic.go:334] "Generic (PLEG): container finished" podID="949f6876-d78f-49fd-b12b-80d0be1ede0b" containerID="0e8cc345a9c59779ddf8e762f64534e18a15e30a2841ccec6631e243b94e244f" exitCode=0 Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.456982 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" 
event={"ID":"949f6876-d78f-49fd-b12b-80d0be1ede0b","Type":"ContainerDied","Data":"0e8cc345a9c59779ddf8e762f64534e18a15e30a2841ccec6631e243b94e244f"} Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.459430 4811 generic.go:334] "Generic (PLEG): container finished" podID="f59c9070-a92e-4e0b-8369-072b2fa631b8" containerID="bded684c44d0911dee05030c4a5883b94e6ee1ee2da8db14cc2e5179bca80770" exitCode=0 Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.459464 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" event={"ID":"f59c9070-a92e-4e0b-8369-072b2fa631b8","Type":"ContainerDied","Data":"bded684c44d0911dee05030c4a5883b94e6ee1ee2da8db14cc2e5179bca80770"} Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.459988 4811 scope.go:117] "RemoveContainer" containerID="bded684c44d0911dee05030c4a5883b94e6ee1ee2da8db14cc2e5179bca80770" Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.462526 4811 generic.go:334] "Generic (PLEG): container finished" podID="fbf98c09-5d18-49b5-9222-d3e42c6de766" containerID="30c6192b60521561c0bdb9c5fac3a2a44a49b787de1f84b5d1042206d81938a3" exitCode=0 Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.462563 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" event={"ID":"fbf98c09-5d18-49b5-9222-d3e42c6de766","Type":"ContainerDied","Data":"30c6192b60521561c0bdb9c5fac3a2a44a49b787de1f84b5d1042206d81938a3"} Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.462825 4811 scope.go:117] "RemoveContainer" containerID="30c6192b60521561c0bdb9c5fac3a2a44a49b787de1f84b5d1042206d81938a3" Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.465134 4811 generic.go:334] "Generic (PLEG): container finished" podID="6f1318e7-8d9a-4324-9d53-4453fdd4d04e" containerID="8a19a0cb14956aaa71189618143187d2e694ded62ce5985a76453e097e125027" exitCode=0 Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.465156 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" event={"ID":"6f1318e7-8d9a-4324-9d53-4453fdd4d04e","Type":"ContainerDied","Data":"8a19a0cb14956aaa71189618143187d2e694ded62ce5985a76453e097e125027"} Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.465458 4811 scope.go:117] "RemoveContainer" containerID="8a19a0cb14956aaa71189618143187d2e694ded62ce5985a76453e097e125027" Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.472791 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sgc6x" podStartSLOduration=2.8469014599999998 podStartE2EDuration="4.472768927s" podCreationTimestamp="2025-12-03 00:31:25 +0000 UTC" firstStartedPulling="2025-12-03 00:31:27.434168284 +0000 UTC m=+1527.575997756" lastFinishedPulling="2025-12-03 00:31:29.060035761 +0000 UTC m=+1529.201865223" observedRunningTime="2025-12-03 00:31:29.47001498 +0000 UTC m=+1529.611844452" watchObservedRunningTime="2025-12-03 00:31:29.472768927 +0000 UTC m=+1529.614598399" Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.862737 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.910913 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-jxt59"] Dec 03 00:31:29 crc kubenswrapper[4811]: E1203 00:31:29.911185 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949f6876-d78f-49fd-b12b-80d0be1ede0b" containerName="default-interconnect" Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.911203 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="949f6876-d78f-49fd-b12b-80d0be1ede0b" containerName="default-interconnect" Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.911352 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="949f6876-d78f-49fd-b12b-80d0be1ede0b" containerName="default-interconnect" Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.911808 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:29 crc kubenswrapper[4811]: I1203 00:31:29.919976 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-jxt59"] Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.048059 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-inter-router-ca\") pod \"949f6876-d78f-49fd-b12b-80d0be1ede0b\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.048208 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-openstack-credentials\") pod \"949f6876-d78f-49fd-b12b-80d0be1ede0b\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.048238 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-sasl-users\") pod \"949f6876-d78f-49fd-b12b-80d0be1ede0b\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.049015 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76mqq\" (UniqueName: \"kubernetes.io/projected/949f6876-d78f-49fd-b12b-80d0be1ede0b-kube-api-access-76mqq\") pod \"949f6876-d78f-49fd-b12b-80d0be1ede0b\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.049049 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-inter-router-credentials\") pod \"949f6876-d78f-49fd-b12b-80d0be1ede0b\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.049091 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-openstack-ca\") pod \"949f6876-d78f-49fd-b12b-80d0be1ede0b\" (UID: 
\"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.049123 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/949f6876-d78f-49fd-b12b-80d0be1ede0b-sasl-config\") pod \"949f6876-d78f-49fd-b12b-80d0be1ede0b\" (UID: \"949f6876-d78f-49fd-b12b-80d0be1ede0b\") " Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.049274 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-sasl-users\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.049549 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm8gm\" (UniqueName: \"kubernetes.io/projected/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-kube-api-access-hm8gm\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.049603 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.049673 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-sasl-config\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.049691 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.049714 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.049741 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " 
pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.050274 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/949f6876-d78f-49fd-b12b-80d0be1ede0b-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "949f6876-d78f-49fd-b12b-80d0be1ede0b" (UID: "949f6876-d78f-49fd-b12b-80d0be1ede0b"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.054071 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "949f6876-d78f-49fd-b12b-80d0be1ede0b" (UID: "949f6876-d78f-49fd-b12b-80d0be1ede0b"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.054610 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "949f6876-d78f-49fd-b12b-80d0be1ede0b" (UID: "949f6876-d78f-49fd-b12b-80d0be1ede0b"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.057425 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "949f6876-d78f-49fd-b12b-80d0be1ede0b" (UID: "949f6876-d78f-49fd-b12b-80d0be1ede0b"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.057454 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949f6876-d78f-49fd-b12b-80d0be1ede0b-kube-api-access-76mqq" (OuterVolumeSpecName: "kube-api-access-76mqq") pod "949f6876-d78f-49fd-b12b-80d0be1ede0b" (UID: "949f6876-d78f-49fd-b12b-80d0be1ede0b"). InnerVolumeSpecName "kube-api-access-76mqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.057470 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "949f6876-d78f-49fd-b12b-80d0be1ede0b" (UID: "949f6876-d78f-49fd-b12b-80d0be1ede0b"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.066193 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "949f6876-d78f-49fd-b12b-80d0be1ede0b" (UID: "949f6876-d78f-49fd-b12b-80d0be1ede0b"). InnerVolumeSpecName "default-interconnect-openstack-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151104 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-sasl-config\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151154 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151183 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151208 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151233 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm8gm\" (UniqueName: \"kubernetes.io/projected/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-kube-api-access-hm8gm\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151251 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-sasl-users\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151311 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151369 4811 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151379 4811 
reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-sasl-users\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151388 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76mqq\" (UniqueName: \"kubernetes.io/projected/949f6876-d78f-49fd-b12b-80d0be1ede0b-kube-api-access-76mqq\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151397 4811 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151406 4811 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151416 4811 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/949f6876-d78f-49fd-b12b-80d0be1ede0b-sasl-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.151424 4811 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/949f6876-d78f-49fd-b12b-80d0be1ede0b-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.152027 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-sasl-config\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.156454 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-sasl-users\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.156534 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.156544 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.164928 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.168903 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.169643 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm8gm\" (UniqueName: \"kubernetes.io/projected/24c610f8-3cf4-4af7-9b6d-38f017ea39a6-kube-api-access-hm8gm\") pod \"default-interconnect-68864d46cb-jxt59\" (UID: \"24c610f8-3cf4-4af7-9b6d-38f017ea39a6\") " pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.230542 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-jxt59" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.485406 4811 generic.go:334] "Generic (PLEG): container finished" podID="6a461187-6080-47bb-8d2b-9192a2075d14" containerID="d9c028ee59a1f144667f5466cf196e2b23626c6ee3e54f202c7be00a3be00a74" exitCode=0 Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.485480 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" event={"ID":"6a461187-6080-47bb-8d2b-9192a2075d14","Type":"ContainerDied","Data":"d9c028ee59a1f144667f5466cf196e2b23626c6ee3e54f202c7be00a3be00a74"} Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.486071 4811 scope.go:117] "RemoveContainer" containerID="d9c028ee59a1f144667f5466cf196e2b23626c6ee3e54f202c7be00a3be00a74" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.489071 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" event={"ID":"6f1318e7-8d9a-4324-9d53-4453fdd4d04e","Type":"ContainerStarted","Data":"0d4d1fe38ff262e6c38ef3ac139a7f64672b5986c2c7124c9846389eb93bd5a1"} Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.493456 4811 generic.go:334] "Generic (PLEG): container finished" podID="16c2ef3e-9973-4bf8-a5ca-e8857c7d478b" containerID="4005686106daa38f52dc3a377fc4b336b116d715e41b2b5d17a33ee3620a03dd" exitCode=0 Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.493530 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" event={"ID":"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b","Type":"ContainerDied","Data":"4005686106daa38f52dc3a377fc4b336b116d715e41b2b5d17a33ee3620a03dd"} Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.494114 4811 scope.go:117] "RemoveContainer" containerID="4005686106daa38f52dc3a377fc4b336b116d715e41b2b5d17a33ee3620a03dd" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.501810 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.502370 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-tmpsp" event={"ID":"949f6876-d78f-49fd-b12b-80d0be1ede0b","Type":"ContainerDied","Data":"1395c05a768201ec0ee7691ded7fe650e181e6df148137dc376e9397783d7c39"} Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.502400 4811 scope.go:117] "RemoveContainer" containerID="0e8cc345a9c59779ddf8e762f64534e18a15e30a2841ccec6631e243b94e244f" Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.522609 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" event={"ID":"f59c9070-a92e-4e0b-8369-072b2fa631b8","Type":"ContainerStarted","Data":"9c4c6db183e4d95366838ff0dcd824e381ddd03a5451beee55f3234b5e5b5255"} Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.546956 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" event={"ID":"fbf98c09-5d18-49b5-9222-d3e42c6de766","Type":"ContainerStarted","Data":"8caf6d8824a72c6bc1879cbddad37b12d26b1fb8bb28cb633915f20798252c89"} Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.619146 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-tmpsp"] Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.633075 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-tmpsp"] Dec 03 00:31:30 crc kubenswrapper[4811]: I1203 00:31:30.709645 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-jxt59"] Dec 03 00:31:31 crc kubenswrapper[4811]: E1203 00:31:31.058313 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f1318e7_8d9a_4324_9d53_4453fdd4d04e.slice/crio-0d4d1fe38ff262e6c38ef3ac139a7f64672b5986c2c7124c9846389eb93bd5a1.scope\": RecentStats: unable to find data in memory cache]" Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.553672 4811 generic.go:334] "Generic (PLEG): container finished" podID="fbf98c09-5d18-49b5-9222-d3e42c6de766" containerID="8caf6d8824a72c6bc1879cbddad37b12d26b1fb8bb28cb633915f20798252c89" exitCode=0 Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.553745 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" event={"ID":"fbf98c09-5d18-49b5-9222-d3e42c6de766","Type":"ContainerDied","Data":"8caf6d8824a72c6bc1879cbddad37b12d26b1fb8bb28cb633915f20798252c89"} Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.553778 4811 scope.go:117] "RemoveContainer" containerID="30c6192b60521561c0bdb9c5fac3a2a44a49b787de1f84b5d1042206d81938a3" Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.554361 4811 scope.go:117] "RemoveContainer" containerID="8caf6d8824a72c6bc1879cbddad37b12d26b1fb8bb28cb633915f20798252c89" Dec 03 00:31:31 crc kubenswrapper[4811]: E1203 00:31:31.554690 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge 
pod=default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb_service-telemetry(fbf98c09-5d18-49b5-9222-d3e42c6de766)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" podUID="fbf98c09-5d18-49b5-9222-d3e42c6de766" Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.556129 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" event={"ID":"6a461187-6080-47bb-8d2b-9192a2075d14","Type":"ContainerStarted","Data":"f2c87c6938f41327e74a61ab213816b21f644aec2df683e97298d681c9e3509f"} Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.559373 4811 generic.go:334] "Generic (PLEG): container finished" podID="6f1318e7-8d9a-4324-9d53-4453fdd4d04e" containerID="0d4d1fe38ff262e6c38ef3ac139a7f64672b5986c2c7124c9846389eb93bd5a1" exitCode=0 Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.559426 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" event={"ID":"6f1318e7-8d9a-4324-9d53-4453fdd4d04e","Type":"ContainerDied","Data":"0d4d1fe38ff262e6c38ef3ac139a7f64672b5986c2c7124c9846389eb93bd5a1"} Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.559849 4811 scope.go:117] "RemoveContainer" containerID="0d4d1fe38ff262e6c38ef3ac139a7f64672b5986c2c7124c9846389eb93bd5a1" Dec 03 00:31:31 crc kubenswrapper[4811]: E1203 00:31:31.560103 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g_service-telemetry(6f1318e7-8d9a-4324-9d53-4453fdd4d04e)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" podUID="6f1318e7-8d9a-4324-9d53-4453fdd4d04e" Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.564728 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" event={"ID":"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b","Type":"ContainerStarted","Data":"335b1ce7573a0e5a96cbbf784100a2b7c9427ebcfe6ca0e291b9b357464832aa"} Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.593890 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-jxt59" event={"ID":"24c610f8-3cf4-4af7-9b6d-38f017ea39a6","Type":"ContainerStarted","Data":"7a93c32f9c5f6eb9329df1b430df3cdbcdd2f45cc39da539a3c42a08c8bd6607"} Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.593940 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-jxt59" event={"ID":"24c610f8-3cf4-4af7-9b6d-38f017ea39a6","Type":"ContainerStarted","Data":"290786ea6ed7a2b100111309af997478cfe2879829203089f3cf89a6ea972bf9"} Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.601524 4811 generic.go:334] "Generic (PLEG): container finished" podID="f59c9070-a92e-4e0b-8369-072b2fa631b8" containerID="9c4c6db183e4d95366838ff0dcd824e381ddd03a5451beee55f3234b5e5b5255" exitCode=0 Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.601587 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" event={"ID":"f59c9070-a92e-4e0b-8369-072b2fa631b8","Type":"ContainerDied","Data":"9c4c6db183e4d95366838ff0dcd824e381ddd03a5451beee55f3234b5e5b5255"} Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.602186 4811 
scope.go:117] "RemoveContainer" containerID="9c4c6db183e4d95366838ff0dcd824e381ddd03a5451beee55f3234b5e5b5255" Dec 03 00:31:31 crc kubenswrapper[4811]: E1203 00:31:31.602547 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc_service-telemetry(f59c9070-a92e-4e0b-8369-072b2fa631b8)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" podUID="f59c9070-a92e-4e0b-8369-072b2fa631b8" Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.618211 4811 scope.go:117] "RemoveContainer" containerID="8a19a0cb14956aaa71189618143187d2e694ded62ce5985a76453e097e125027" Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.665133 4811 scope.go:117] "RemoveContainer" containerID="bded684c44d0911dee05030c4a5883b94e6ee1ee2da8db14cc2e5179bca80770" Dec 03 00:31:31 crc kubenswrapper[4811]: I1203 00:31:31.732700 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-jxt59" podStartSLOduration=3.73268055 podStartE2EDuration="3.73268055s" podCreationTimestamp="2025-12-03 00:31:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:31:31.695393756 +0000 UTC m=+1531.837223228" watchObservedRunningTime="2025-12-03 00:31:31.73268055 +0000 UTC m=+1531.874510022" Dec 03 00:31:32 crc kubenswrapper[4811]: I1203 00:31:32.122855 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949f6876-d78f-49fd-b12b-80d0be1ede0b" path="/var/lib/kubelet/pods/949f6876-d78f-49fd-b12b-80d0be1ede0b/volumes" Dec 03 00:31:32 crc kubenswrapper[4811]: I1203 00:31:32.609499 4811 generic.go:334] "Generic (PLEG): container finished" podID="16c2ef3e-9973-4bf8-a5ca-e8857c7d478b" containerID="335b1ce7573a0e5a96cbbf784100a2b7c9427ebcfe6ca0e291b9b357464832aa" exitCode=0 Dec 03 00:31:32 crc kubenswrapper[4811]: I1203 00:31:32.609566 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" event={"ID":"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b","Type":"ContainerDied","Data":"335b1ce7573a0e5a96cbbf784100a2b7c9427ebcfe6ca0e291b9b357464832aa"} Dec 03 00:31:32 crc kubenswrapper[4811]: I1203 00:31:32.609599 4811 scope.go:117] "RemoveContainer" containerID="4005686106daa38f52dc3a377fc4b336b116d715e41b2b5d17a33ee3620a03dd" Dec 03 00:31:32 crc kubenswrapper[4811]: I1203 00:31:32.609957 4811 scope.go:117] "RemoveContainer" containerID="335b1ce7573a0e5a96cbbf784100a2b7c9427ebcfe6ca0e291b9b357464832aa" Dec 03 00:31:32 crc kubenswrapper[4811]: E1203 00:31:32.610125 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6_service-telemetry(16c2ef3e-9973-4bf8-a5ca-e8857c7d478b)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" podUID="16c2ef3e-9973-4bf8-a5ca-e8857c7d478b" Dec 03 00:31:32 crc kubenswrapper[4811]: I1203 00:31:32.618675 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" 
event={"ID":"6a461187-6080-47bb-8d2b-9192a2075d14","Type":"ContainerDied","Data":"f2c87c6938f41327e74a61ab213816b21f644aec2df683e97298d681c9e3509f"} Dec 03 00:31:32 crc kubenswrapper[4811]: I1203 00:31:32.619416 4811 scope.go:117] "RemoveContainer" containerID="f2c87c6938f41327e74a61ab213816b21f644aec2df683e97298d681c9e3509f" Dec 03 00:31:32 crc kubenswrapper[4811]: E1203 00:31:32.619832 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww_service-telemetry(6a461187-6080-47bb-8d2b-9192a2075d14)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" podUID="6a461187-6080-47bb-8d2b-9192a2075d14" Dec 03 00:31:32 crc kubenswrapper[4811]: I1203 00:31:32.618619 4811 generic.go:334] "Generic (PLEG): container finished" podID="6a461187-6080-47bb-8d2b-9192a2075d14" containerID="f2c87c6938f41327e74a61ab213816b21f644aec2df683e97298d681c9e3509f" exitCode=0 Dec 03 00:31:32 crc kubenswrapper[4811]: I1203 00:31:32.709638 4811 scope.go:117] "RemoveContainer" containerID="d9c028ee59a1f144667f5466cf196e2b23626c6ee3e54f202c7be00a3be00a74" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.037956 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.039064 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.041662 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.058866 4811 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.066853 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.207544 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d\") " pod="service-telemetry/qdr-test" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.207648 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vww\" (UniqueName: \"kubernetes.io/projected/9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d-kube-api-access-r4vww\") pod \"qdr-test\" (UID: \"9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d\") " pod="service-telemetry/qdr-test" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.207709 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d-qdr-test-config\") pod \"qdr-test\" (UID: \"9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d\") " pod="service-telemetry/qdr-test" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.309135 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: 
\"kubernetes.io/secret/9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d\") " pod="service-telemetry/qdr-test" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.309475 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4vww\" (UniqueName: \"kubernetes.io/projected/9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d-kube-api-access-r4vww\") pod \"qdr-test\" (UID: \"9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d\") " pod="service-telemetry/qdr-test" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.309574 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d-qdr-test-config\") pod \"qdr-test\" (UID: \"9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d\") " pod="service-telemetry/qdr-test" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.310577 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d-qdr-test-config\") pod \"qdr-test\" (UID: \"9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d\") " pod="service-telemetry/qdr-test" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.322866 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d\") " pod="service-telemetry/qdr-test" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.326937 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4vww\" (UniqueName: \"kubernetes.io/projected/9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d-kube-api-access-r4vww\") pod \"qdr-test\" (UID: \"9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d\") " pod="service-telemetry/qdr-test" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.423089 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Dec 03 00:31:35 crc kubenswrapper[4811]: I1203 00:31:35.873752 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 03 00:31:35 crc kubenswrapper[4811]: W1203 00:31:35.875213 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d70fe6a_ce2b_40a8_ba60_3daeab8bcb1d.slice/crio-895045fa598feb4e81c44cdc8448a941f74e14d6ef73a9c95557cf49cf25f6d6 WatchSource:0}: Error finding container 895045fa598feb4e81c44cdc8448a941f74e14d6ef73a9c95557cf49cf25f6d6: Status 404 returned error can't find the container with id 895045fa598feb4e81c44cdc8448a941f74e14d6ef73a9c95557cf49cf25f6d6 Dec 03 00:31:36 crc kubenswrapper[4811]: I1203 00:31:36.174968 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:36 crc kubenswrapper[4811]: I1203 00:31:36.175021 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:36 crc kubenswrapper[4811]: I1203 00:31:36.219559 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:36 crc kubenswrapper[4811]: I1203 00:31:36.661686 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d","Type":"ContainerStarted","Data":"895045fa598feb4e81c44cdc8448a941f74e14d6ef73a9c95557cf49cf25f6d6"} Dec 03 00:31:36 crc kubenswrapper[4811]: I1203 00:31:36.702983 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:37 crc kubenswrapper[4811]: I1203 00:31:37.443713 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sgc6x"] Dec 03 00:31:38 crc kubenswrapper[4811]: I1203 00:31:38.683051 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sgc6x" podUID="88ed48a4-3b42-48e8-ae22-35782b9c5c68" containerName="registry-server" containerID="cri-o://015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9" gracePeriod=2 Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.186543 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.380868 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ed48a4-3b42-48e8-ae22-35782b9c5c68-utilities\") pod \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\" (UID: \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\") " Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.381644 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ed48a4-3b42-48e8-ae22-35782b9c5c68-catalog-content\") pod \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\" (UID: \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\") " Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.381737 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5vmh\" (UniqueName: \"kubernetes.io/projected/88ed48a4-3b42-48e8-ae22-35782b9c5c68-kube-api-access-l5vmh\") pod \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\" (UID: \"88ed48a4-3b42-48e8-ae22-35782b9c5c68\") " Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.381585 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88ed48a4-3b42-48e8-ae22-35782b9c5c68-utilities" (OuterVolumeSpecName: "utilities") pod "88ed48a4-3b42-48e8-ae22-35782b9c5c68" (UID: "88ed48a4-3b42-48e8-ae22-35782b9c5c68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.396458 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ed48a4-3b42-48e8-ae22-35782b9c5c68-kube-api-access-l5vmh" (OuterVolumeSpecName: "kube-api-access-l5vmh") pod "88ed48a4-3b42-48e8-ae22-35782b9c5c68" (UID: "88ed48a4-3b42-48e8-ae22-35782b9c5c68"). InnerVolumeSpecName "kube-api-access-l5vmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.404488 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5vmh\" (UniqueName: \"kubernetes.io/projected/88ed48a4-3b42-48e8-ae22-35782b9c5c68-kube-api-access-l5vmh\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.404526 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ed48a4-3b42-48e8-ae22-35782b9c5c68-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.690838 4811 generic.go:334] "Generic (PLEG): container finished" podID="88ed48a4-3b42-48e8-ae22-35782b9c5c68" containerID="015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9" exitCode=0 Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.690885 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgc6x" event={"ID":"88ed48a4-3b42-48e8-ae22-35782b9c5c68","Type":"ContainerDied","Data":"015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9"} Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.690916 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgc6x" event={"ID":"88ed48a4-3b42-48e8-ae22-35782b9c5c68","Type":"ContainerDied","Data":"33191fac927ebe2174ea2edbcd921ce8c43e1061041313c72d9d0f124603e959"} Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.690935 4811 scope.go:117] "RemoveContainer" containerID="015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.691078 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sgc6x" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.708550 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88ed48a4-3b42-48e8-ae22-35782b9c5c68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88ed48a4-3b42-48e8-ae22-35782b9c5c68" (UID: "88ed48a4-3b42-48e8-ae22-35782b9c5c68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.721844 4811 scope.go:117] "RemoveContainer" containerID="cdc4f85d58696c15c3fc8e89f14e21f3d5e981ba40fc015673a93106d075940c" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.741659 4811 scope.go:117] "RemoveContainer" containerID="3163e1f7e9f1fa1fc4aa0e12d116fddba14f8762f72ba268024d3019180a57c4" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.763422 4811 scope.go:117] "RemoveContainer" containerID="015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9" Dec 03 00:31:39 crc kubenswrapper[4811]: E1203 00:31:39.764009 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9\": container with ID starting with 015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9 not found: ID does not exist" containerID="015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.764038 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9"} err="failed to get container status \"015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9\": rpc error: code = NotFound desc = could not find container \"015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9\": container with ID starting with 015c3bd103497b92e113455c71496be272266446e2d7f6cdef66054ba88b35a9 not found: ID does not exist" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.764058 4811 scope.go:117] "RemoveContainer" containerID="cdc4f85d58696c15c3fc8e89f14e21f3d5e981ba40fc015673a93106d075940c" Dec 03 00:31:39 crc kubenswrapper[4811]: E1203 00:31:39.764559 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdc4f85d58696c15c3fc8e89f14e21f3d5e981ba40fc015673a93106d075940c\": container with ID starting with cdc4f85d58696c15c3fc8e89f14e21f3d5e981ba40fc015673a93106d075940c not found: ID does not exist" containerID="cdc4f85d58696c15c3fc8e89f14e21f3d5e981ba40fc015673a93106d075940c" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.764608 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdc4f85d58696c15c3fc8e89f14e21f3d5e981ba40fc015673a93106d075940c"} err="failed to get container status \"cdc4f85d58696c15c3fc8e89f14e21f3d5e981ba40fc015673a93106d075940c\": rpc error: code = NotFound desc = could not find container \"cdc4f85d58696c15c3fc8e89f14e21f3d5e981ba40fc015673a93106d075940c\": container with ID starting with cdc4f85d58696c15c3fc8e89f14e21f3d5e981ba40fc015673a93106d075940c not found: ID does not exist" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.764642 4811 scope.go:117] "RemoveContainer" containerID="3163e1f7e9f1fa1fc4aa0e12d116fddba14f8762f72ba268024d3019180a57c4" Dec 03 00:31:39 crc kubenswrapper[4811]: E1203 00:31:39.764981 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3163e1f7e9f1fa1fc4aa0e12d116fddba14f8762f72ba268024d3019180a57c4\": container with ID starting with 3163e1f7e9f1fa1fc4aa0e12d116fddba14f8762f72ba268024d3019180a57c4 not found: ID does not exist" containerID="3163e1f7e9f1fa1fc4aa0e12d116fddba14f8762f72ba268024d3019180a57c4" Dec 03 00:31:39 crc 
kubenswrapper[4811]: I1203 00:31:39.765006 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3163e1f7e9f1fa1fc4aa0e12d116fddba14f8762f72ba268024d3019180a57c4"} err="failed to get container status \"3163e1f7e9f1fa1fc4aa0e12d116fddba14f8762f72ba268024d3019180a57c4\": rpc error: code = NotFound desc = could not find container \"3163e1f7e9f1fa1fc4aa0e12d116fddba14f8762f72ba268024d3019180a57c4\": container with ID starting with 3163e1f7e9f1fa1fc4aa0e12d116fddba14f8762f72ba268024d3019180a57c4 not found: ID does not exist" Dec 03 00:31:39 crc kubenswrapper[4811]: I1203 00:31:39.809432 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ed48a4-3b42-48e8-ae22-35782b9c5c68-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:40 crc kubenswrapper[4811]: I1203 00:31:40.017911 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sgc6x"] Dec 03 00:31:40 crc kubenswrapper[4811]: I1203 00:31:40.022893 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sgc6x"] Dec 03 00:31:40 crc kubenswrapper[4811]: I1203 00:31:40.122116 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88ed48a4-3b42-48e8-ae22-35782b9c5c68" path="/var/lib/kubelet/pods/88ed48a4-3b42-48e8-ae22-35782b9c5c68/volumes" Dec 03 00:31:42 crc kubenswrapper[4811]: I1203 00:31:42.117416 4811 scope.go:117] "RemoveContainer" containerID="0d4d1fe38ff262e6c38ef3ac139a7f64672b5986c2c7124c9846389eb93bd5a1" Dec 03 00:31:43 crc kubenswrapper[4811]: I1203 00:31:43.115139 4811 scope.go:117] "RemoveContainer" containerID="f2c87c6938f41327e74a61ab213816b21f644aec2df683e97298d681c9e3509f" Dec 03 00:31:43 crc kubenswrapper[4811]: I1203 00:31:43.115249 4811 scope.go:117] "RemoveContainer" containerID="8caf6d8824a72c6bc1879cbddad37b12d26b1fb8bb28cb633915f20798252c89" Dec 03 00:31:43 crc kubenswrapper[4811]: I1203 00:31:43.115791 4811 scope.go:117] "RemoveContainer" containerID="9c4c6db183e4d95366838ff0dcd824e381ddd03a5451beee55f3234b5e5b5255" Dec 03 00:31:46 crc kubenswrapper[4811]: I1203 00:31:46.115306 4811 scope.go:117] "RemoveContainer" containerID="335b1ce7573a0e5a96cbbf784100a2b7c9427ebcfe6ca0e291b9b357464832aa" Dec 03 00:31:46 crc kubenswrapper[4811]: I1203 00:31:46.756446 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww" event={"ID":"6a461187-6080-47bb-8d2b-9192a2075d14","Type":"ContainerStarted","Data":"66619f7538e07c52b4e999af68fb5e6ce659efaaa85f87e74573a201f2762b2f"} Dec 03 00:31:46 crc kubenswrapper[4811]: I1203 00:31:46.761183 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g" event={"ID":"6f1318e7-8d9a-4324-9d53-4453fdd4d04e","Type":"ContainerStarted","Data":"53ea5e69982770bddd5f5e8d479cb57e24fe80c156e1f41d50d0f523e45ff022"} Dec 03 00:31:46 crc kubenswrapper[4811]: I1203 00:31:46.764367 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6" event={"ID":"16c2ef3e-9973-4bf8-a5ca-e8857c7d478b","Type":"ContainerStarted","Data":"8d499f2c6d382c28a7ea1cc715ccfd0b89da6d746d989ff80729d0cc22a8596e"} Dec 03 00:31:46 crc kubenswrapper[4811]: I1203 00:31:46.766729 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" 
event={"ID":"9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d","Type":"ContainerStarted","Data":"3cb8eadcff3eab3abcdc2df592064670fd3482275d37751d6d9828b0448d3ea2"} Dec 03 00:31:46 crc kubenswrapper[4811]: I1203 00:31:46.769283 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc" event={"ID":"f59c9070-a92e-4e0b-8369-072b2fa631b8","Type":"ContainerStarted","Data":"4943dab0d3358eff135895edc094f8265a294423df0f7f9d13b142bfbee7199a"} Dec 03 00:31:46 crc kubenswrapper[4811]: I1203 00:31:46.772758 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb" event={"ID":"fbf98c09-5d18-49b5-9222-d3e42c6de766","Type":"ContainerStarted","Data":"34da4ba4d46f10e6edbfb895f736aeb87f45740ade290a47a478a0709dbee828"} Dec 03 00:31:46 crc kubenswrapper[4811]: I1203 00:31:46.784686 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.438002677 podStartE2EDuration="11.784665332s" podCreationTimestamp="2025-12-03 00:31:35 +0000 UTC" firstStartedPulling="2025-12-03 00:31:35.877720738 +0000 UTC m=+1536.019550210" lastFinishedPulling="2025-12-03 00:31:46.224383393 +0000 UTC m=+1546.366212865" observedRunningTime="2025-12-03 00:31:46.782116961 +0000 UTC m=+1546.923946433" watchObservedRunningTime="2025-12-03 00:31:46.784665332 +0000 UTC m=+1546.926494804" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.107038 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-kj6l5"] Dec 03 00:31:47 crc kubenswrapper[4811]: E1203 00:31:47.107297 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ed48a4-3b42-48e8-ae22-35782b9c5c68" containerName="extract-content" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.107313 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ed48a4-3b42-48e8-ae22-35782b9c5c68" containerName="extract-content" Dec 03 00:31:47 crc kubenswrapper[4811]: E1203 00:31:47.107336 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ed48a4-3b42-48e8-ae22-35782b9c5c68" containerName="extract-utilities" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.107343 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ed48a4-3b42-48e8-ae22-35782b9c5c68" containerName="extract-utilities" Dec 03 00:31:47 crc kubenswrapper[4811]: E1203 00:31:47.107352 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ed48a4-3b42-48e8-ae22-35782b9c5c68" containerName="registry-server" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.107360 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ed48a4-3b42-48e8-ae22-35782b9c5c68" containerName="registry-server" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.107463 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ed48a4-3b42-48e8-ae22-35782b9c5c68" containerName="registry-server" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.108051 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.109841 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.109850 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.110754 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.110814 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.110959 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.113493 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.119568 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-kj6l5"] Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.150846 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.150952 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rwjt\" (UniqueName: \"kubernetes.io/projected/95415505-aeec-4c5e-a838-4480230358f9-kube-api-access-9rwjt\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.150992 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-sensubility-config\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.151053 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-ceilometer-publisher\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.151084 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-collectd-config\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: 
I1203 00:31:47.151153 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.151174 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-healthcheck-log\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.252607 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-ceilometer-publisher\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.252658 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-collectd-config\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.252688 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.252715 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-healthcheck-log\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.252760 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.252779 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rwjt\" (UniqueName: \"kubernetes.io/projected/95415505-aeec-4c5e-a838-4480230358f9-kube-api-access-9rwjt\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.252796 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-sensubility-config\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: 
\"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.253661 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-sensubility-config\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.254039 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.254226 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-healthcheck-log\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.254502 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-ceilometer-publisher\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.254848 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.255082 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-collectd-config\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.278108 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rwjt\" (UniqueName: \"kubernetes.io/projected/95415505-aeec-4c5e-a838-4480230358f9-kube-api-access-9rwjt\") pod \"stf-smoketest-smoke1-kj6l5\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.421221 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.581743 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.584860 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.590941 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.662937 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8svv\" (UniqueName: \"kubernetes.io/projected/e1091034-7173-47b5-a5f5-bf052622dd72-kube-api-access-j8svv\") pod \"curl\" (UID: \"e1091034-7173-47b5-a5f5-bf052622dd72\") " pod="service-telemetry/curl" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.764641 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8svv\" (UniqueName: \"kubernetes.io/projected/e1091034-7173-47b5-a5f5-bf052622dd72-kube-api-access-j8svv\") pod \"curl\" (UID: \"e1091034-7173-47b5-a5f5-bf052622dd72\") " pod="service-telemetry/curl" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.785878 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8svv\" (UniqueName: \"kubernetes.io/projected/e1091034-7173-47b5-a5f5-bf052622dd72-kube-api-access-j8svv\") pod \"curl\" (UID: \"e1091034-7173-47b5-a5f5-bf052622dd72\") " pod="service-telemetry/curl" Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.888703 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-kj6l5"] Dec 03 00:31:47 crc kubenswrapper[4811]: I1203 00:31:47.931927 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 03 00:31:48 crc kubenswrapper[4811]: I1203 00:31:48.340344 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 03 00:31:48 crc kubenswrapper[4811]: I1203 00:31:48.792595 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e1091034-7173-47b5-a5f5-bf052622dd72","Type":"ContainerStarted","Data":"4c1ad4333754de5324f8ddd5a6e8e2ee01644a844ef09ca41cfaeb1d339188d1"} Dec 03 00:31:48 crc kubenswrapper[4811]: I1203 00:31:48.794242 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" event={"ID":"95415505-aeec-4c5e-a838-4480230358f9","Type":"ContainerStarted","Data":"17829b71396d63fc2363210b564b6300cd52f6a879f0f13edae4ad4a019d2fe3"} Dec 03 00:31:53 crc kubenswrapper[4811]: I1203 00:31:53.831804 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1091034-7173-47b5-a5f5-bf052622dd72" containerID="3216d19bcbd5c294032d4dc38c7abba7545530d49a07ad96b0e053ed05e9c228" exitCode=0 Dec 03 00:31:53 crc kubenswrapper[4811]: I1203 00:31:53.832155 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e1091034-7173-47b5-a5f5-bf052622dd72","Type":"ContainerDied","Data":"3216d19bcbd5c294032d4dc38c7abba7545530d49a07ad96b0e053ed05e9c228"} Dec 03 00:31:55 crc kubenswrapper[4811]: I1203 00:31:55.104929 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 03 00:31:55 crc kubenswrapper[4811]: I1203 00:31:55.196596 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8svv\" (UniqueName: \"kubernetes.io/projected/e1091034-7173-47b5-a5f5-bf052622dd72-kube-api-access-j8svv\") pod \"e1091034-7173-47b5-a5f5-bf052622dd72\" (UID: \"e1091034-7173-47b5-a5f5-bf052622dd72\") " Dec 03 00:31:55 crc kubenswrapper[4811]: I1203 00:31:55.213934 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1091034-7173-47b5-a5f5-bf052622dd72-kube-api-access-j8svv" (OuterVolumeSpecName: "kube-api-access-j8svv") pod "e1091034-7173-47b5-a5f5-bf052622dd72" (UID: "e1091034-7173-47b5-a5f5-bf052622dd72"). InnerVolumeSpecName "kube-api-access-j8svv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:31:55 crc kubenswrapper[4811]: I1203 00:31:55.300858 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8svv\" (UniqueName: \"kubernetes.io/projected/e1091034-7173-47b5-a5f5-bf052622dd72-kube-api-access-j8svv\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:55 crc kubenswrapper[4811]: I1203 00:31:55.311864 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_e1091034-7173-47b5-a5f5-bf052622dd72/curl/0.log" Dec 03 00:31:55 crc kubenswrapper[4811]: I1203 00:31:55.614732 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-dgvqc_13eccb10-6229-4374-a790-e8677f1438dd/prometheus-webhook-snmp/0.log" Dec 03 00:31:55 crc kubenswrapper[4811]: I1203 00:31:55.847646 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"e1091034-7173-47b5-a5f5-bf052622dd72","Type":"ContainerDied","Data":"4c1ad4333754de5324f8ddd5a6e8e2ee01644a844ef09ca41cfaeb1d339188d1"} Dec 03 00:31:55 crc kubenswrapper[4811]: I1203 00:31:55.847685 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c1ad4333754de5324f8ddd5a6e8e2ee01644a844ef09ca41cfaeb1d339188d1" Dec 03 00:31:55 crc kubenswrapper[4811]: I1203 00:31:55.847713 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 03 00:32:04 crc kubenswrapper[4811]: I1203 00:32:04.923756 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" event={"ID":"95415505-aeec-4c5e-a838-4480230358f9","Type":"ContainerStarted","Data":"d91a6b54f4c7ffbfc5295a7f4fddd493165b1913f0b6b7ce4c5423d75c31eb5e"} Dec 03 00:32:16 crc kubenswrapper[4811]: E1203 00:32:16.248975 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/tripleomastercentos9/openstack-ceilometer-notification:current-tripleo" Dec 03 00:32:16 crc kubenswrapper[4811]: E1203 00:32:16.249648 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:smoketest-ceilometer,Image:quay.io/tripleomastercentos9/openstack-ceilometer-notification:current-tripleo,Command:[/smoketest_ceilometer_entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLOUDNAME,Value:smoke1,ValueFrom:nil,},EnvVar{Name:ELASTICSEARCH_AUTH_PASS,Value:an1jEjixVQFu36gHYXu48XMa,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_AUTH_TOKEN,Value:eyJhbGciOiJSUzI1NiIsImtpZCI6InF6SnFxNFFjbVk5VmJQZ2dNMmUxdHFmTlJlVWx4UDhSTlhIamV3RUx4WU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc2NDcyNTQ5MCwiaWF0IjoxNzY0NzIxODkwLCJpc3MiOiJodHRwczovL2t1YmVybmV0ZXMuZGVmYXVsdC5zdmMiLCJqdGkiOiI0NTI3YjBkZS1iYTIwLTRjMTQtOTIwOS1iMGVlZWM1MWQzNDQiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6InNlcnZpY2UtdGVsZW1ldHJ5Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6InN0Zi1wcm9tZXRoZXVzLXJlYWRlciIsInVpZCI6IjA4NDUwOTVjLTIxNTItNGQ1OC04MzUzLTdlNTg1OTJmM2JjZiJ9fSwibmJmIjoxNzY0NzIxODkwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c2VydmljZS10ZWxlbWV0cnk6c3RmLXByb21ldGhldXMtcmVhZGVyIn0.VdwHDpmSCRAmzefU6Wcl_Q8lNb9D38V0IiiYIHHSokqfWO6nXeXgkpJ9LcdJ0YAB1sjLE9qCChi6-Yhkr7uxtS4xbGfyEQELNwVejmEzpiTAWwv7j5_O-4-icNOQQzlWgtGopZFAx49dGFVGTO4eW29RtF1aLIVQGYChb7AD-h_kRUD6YSQ7xmkeIqhzzNjrIm5epHKPaW-VtfnnQpplIhSzT47WXqJmGo5MI3rqbhUTWeBp2rcAAv0IQqm4VpSc90E8JdYaLWyaiSixF7G2L3Dpx5RTCEt18USKO0CJ0F2wLUzTsqsH7UdS7GaqIKsSS7GSp-h0W6SeIl3RT80wKVaa2LVyFq2hZ3I_P5kiCwSNUtwoQh-p8nKcnJZ8AaF_6vul4eDr_n_otUL_ePFo4zZQ2cACRoeh1wJoVmcv_euNNis_Bc_GjqwcXvFx2S450L2V8YF0HhUotRvuo3NR3687mPVsZrtQfn-m4xDvm3YGrfLfRuFJ7-rRW1Ir7ZFLywhB4r65Vu9JJ4vuoJvpwmPdkgybHxiB1K6UDqZXySc_T8kYN_EU7eoFi-kttv64rZ4sLyDZlUr-3jXNpHwwlOq4volF9x71PIyNRJ1hqhwqJf-rI07SByvoS4r_VBn6yO9M5nvk3WeBTjLqMT5jTzE1HpUfTsOrUPXMUoqYA6U,ValueFrom:nil,},EnvVar{Name:OBSERVABILITY_STRATEGY,Value:<>,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ceilometer-publisher,ReadOnly:false,MountPath:/ceilometer_publish.py,SubPath:ceilometer_publish.py,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-entrypoint-script,ReadOnly:false,MountPath:/smoketest_ceilometer_entrypoint.sh,SubPath:smoketest_ceilometer_entrypoint.sh,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rwjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,
AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod stf-smoketest-smoke1-kj6l5_service-telemetry(95415505-aeec-4c5e-a838-4480230358f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 00:32:16 crc kubenswrapper[4811]: E1203 00:32:16.251015 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-ceilometer\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" podUID="95415505-aeec-4c5e-a838-4480230358f9" Dec 03 00:32:17 crc kubenswrapper[4811]: E1203 00:32:17.044695 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-ceilometer\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tripleomastercentos9/openstack-ceilometer-notification:current-tripleo\\\"\"" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" podUID="95415505-aeec-4c5e-a838-4480230358f9" Dec 03 00:32:25 crc kubenswrapper[4811]: I1203 00:32:25.828692 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-dgvqc_13eccb10-6229-4374-a790-e8677f1438dd/prometheus-webhook-snmp/0.log" Dec 03 00:32:29 crc kubenswrapper[4811]: I1203 00:32:29.155470 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" event={"ID":"95415505-aeec-4c5e-a838-4480230358f9","Type":"ContainerStarted","Data":"86fc2b5d2b7e05d81683a11774a9c55d34c2ff2a84c20e3fa0834f52b023cccf"} Dec 03 00:32:29 crc kubenswrapper[4811]: I1203 00:32:29.179963 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" podStartSLOduration=1.343980553 podStartE2EDuration="42.179946453s" podCreationTimestamp="2025-12-03 00:31:47 +0000 UTC" firstStartedPulling="2025-12-03 00:31:47.900063737 +0000 UTC m=+1548.041893209" lastFinishedPulling="2025-12-03 00:32:28.736029637 +0000 UTC m=+1588.877859109" observedRunningTime="2025-12-03 00:32:29.176744556 +0000 UTC m=+1589.318574028" watchObservedRunningTime="2025-12-03 00:32:29.179946453 +0000 UTC m=+1589.321775925" Dec 03 00:32:38 crc kubenswrapper[4811]: I1203 00:32:38.224847 4811 generic.go:334] "Generic (PLEG): container finished" podID="95415505-aeec-4c5e-a838-4480230358f9" containerID="d91a6b54f4c7ffbfc5295a7f4fddd493165b1913f0b6b7ce4c5423d75c31eb5e" exitCode=0 Dec 03 00:32:38 crc kubenswrapper[4811]: I1203 00:32:38.224929 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" event={"ID":"95415505-aeec-4c5e-a838-4480230358f9","Type":"ContainerDied","Data":"d91a6b54f4c7ffbfc5295a7f4fddd493165b1913f0b6b7ce4c5423d75c31eb5e"} Dec 03 00:32:38 crc kubenswrapper[4811]: I1203 00:32:38.226021 4811 scope.go:117] "RemoveContainer" containerID="d91a6b54f4c7ffbfc5295a7f4fddd493165b1913f0b6b7ce4c5423d75c31eb5e" Dec 03 00:33:01 crc kubenswrapper[4811]: I1203 00:33:01.423805 4811 generic.go:334] "Generic (PLEG): container finished" podID="95415505-aeec-4c5e-a838-4480230358f9" containerID="86fc2b5d2b7e05d81683a11774a9c55d34c2ff2a84c20e3fa0834f52b023cccf" exitCode=0 Dec 03 00:33:01 crc 
kubenswrapper[4811]: I1203 00:33:01.423888 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" event={"ID":"95415505-aeec-4c5e-a838-4480230358f9","Type":"ContainerDied","Data":"86fc2b5d2b7e05d81683a11774a9c55d34c2ff2a84c20e3fa0834f52b023cccf"} Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.709077 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.815032 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-sensubility-config\") pod \"95415505-aeec-4c5e-a838-4480230358f9\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.815128 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-ceilometer-publisher\") pod \"95415505-aeec-4c5e-a838-4480230358f9\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.815181 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-healthcheck-log\") pod \"95415505-aeec-4c5e-a838-4480230358f9\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.815206 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-collectd-entrypoint-script\") pod \"95415505-aeec-4c5e-a838-4480230358f9\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.815238 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rwjt\" (UniqueName: \"kubernetes.io/projected/95415505-aeec-4c5e-a838-4480230358f9-kube-api-access-9rwjt\") pod \"95415505-aeec-4c5e-a838-4480230358f9\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.815344 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-collectd-config\") pod \"95415505-aeec-4c5e-a838-4480230358f9\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.815377 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-ceilometer-entrypoint-script\") pod \"95415505-aeec-4c5e-a838-4480230358f9\" (UID: \"95415505-aeec-4c5e-a838-4480230358f9\") " Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.823048 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95415505-aeec-4c5e-a838-4480230358f9-kube-api-access-9rwjt" (OuterVolumeSpecName: "kube-api-access-9rwjt") pod "95415505-aeec-4c5e-a838-4480230358f9" (UID: "95415505-aeec-4c5e-a838-4480230358f9"). InnerVolumeSpecName "kube-api-access-9rwjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.838357 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "95415505-aeec-4c5e-a838-4480230358f9" (UID: "95415505-aeec-4c5e-a838-4480230358f9"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.842120 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "95415505-aeec-4c5e-a838-4480230358f9" (UID: "95415505-aeec-4c5e-a838-4480230358f9"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.843371 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "95415505-aeec-4c5e-a838-4480230358f9" (UID: "95415505-aeec-4c5e-a838-4480230358f9"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.844175 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "95415505-aeec-4c5e-a838-4480230358f9" (UID: "95415505-aeec-4c5e-a838-4480230358f9"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.848970 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "95415505-aeec-4c5e-a838-4480230358f9" (UID: "95415505-aeec-4c5e-a838-4480230358f9"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.858735 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "95415505-aeec-4c5e-a838-4480230358f9" (UID: "95415505-aeec-4c5e-a838-4480230358f9"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.917419 4811 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.917461 4811 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.917471 4811 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.917480 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rwjt\" (UniqueName: \"kubernetes.io/projected/95415505-aeec-4c5e-a838-4480230358f9-kube-api-access-9rwjt\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.917489 4811 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.917497 4811 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.917507 4811 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/95415505-aeec-4c5e-a838-4480230358f9-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.940229 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:33:02 crc kubenswrapper[4811]: I1203 00:33:02.940301 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:33:03 crc kubenswrapper[4811]: I1203 00:33:03.438776 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" event={"ID":"95415505-aeec-4c5e-a838-4480230358f9","Type":"ContainerDied","Data":"17829b71396d63fc2363210b564b6300cd52f6a879f0f13edae4ad4a019d2fe3"} Dec 03 00:33:03 crc kubenswrapper[4811]: I1203 00:33:03.439124 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17829b71396d63fc2363210b564b6300cd52f6a879f0f13edae4ad4a019d2fe3" Dec 03 00:33:03 crc kubenswrapper[4811]: I1203 00:33:03.438847 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-kj6l5" Dec 03 00:33:04 crc kubenswrapper[4811]: I1203 00:33:04.818453 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-kj6l5_95415505-aeec-4c5e-a838-4480230358f9/smoketest-collectd/0.log" Dec 03 00:33:05 crc kubenswrapper[4811]: I1203 00:33:05.165879 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-kj6l5_95415505-aeec-4c5e-a838-4480230358f9/smoketest-ceilometer/0.log" Dec 03 00:33:05 crc kubenswrapper[4811]: I1203 00:33:05.511212 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-jxt59_24c610f8-3cf4-4af7-9b6d-38f017ea39a6/default-interconnect/0.log" Dec 03 00:33:05 crc kubenswrapper[4811]: I1203 00:33:05.809134 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6_16c2ef3e-9973-4bf8-a5ca-e8857c7d478b/bridge/2.log" Dec 03 00:33:06 crc kubenswrapper[4811]: I1203 00:33:06.127999 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-9q7h6_16c2ef3e-9973-4bf8-a5ca-e8857c7d478b/sg-core/0.log" Dec 03 00:33:06 crc kubenswrapper[4811]: I1203 00:33:06.456793 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc_f59c9070-a92e-4e0b-8369-072b2fa631b8/bridge/2.log" Dec 03 00:33:06 crc kubenswrapper[4811]: I1203 00:33:06.798764 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-69ff59fdd6-vrnlc_f59c9070-a92e-4e0b-8369-072b2fa631b8/sg-core/0.log" Dec 03 00:33:07 crc kubenswrapper[4811]: I1203 00:33:07.131536 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb_fbf98c09-5d18-49b5-9222-d3e42c6de766/bridge/2.log" Dec 03 00:33:07 crc kubenswrapper[4811]: I1203 00:33:07.417645 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-l7bcb_fbf98c09-5d18-49b5-9222-d3e42c6de766/sg-core/0.log" Dec 03 00:33:07 crc kubenswrapper[4811]: I1203 00:33:07.733233 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww_6a461187-6080-47bb-8d2b-9192a2075d14/bridge/2.log" Dec 03 00:33:08 crc kubenswrapper[4811]: I1203 00:33:08.054953 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-85d8975c64-mzjww_6a461187-6080-47bb-8d2b-9192a2075d14/sg-core/0.log" Dec 03 00:33:08 crc kubenswrapper[4811]: I1203 00:33:08.369128 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g_6f1318e7-8d9a-4324-9d53-4453fdd4d04e/bridge/2.log" Dec 03 00:33:08 crc kubenswrapper[4811]: I1203 00:33:08.688940 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-qbl5g_6f1318e7-8d9a-4324-9d53-4453fdd4d04e/sg-core/0.log" Dec 03 00:33:11 crc kubenswrapper[4811]: I1203 00:33:11.362751 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-58f46bc8bd-x9c5q_576300b3-9d27-41c9-83eb-23cc9560c2d6/operator/0.log" Dec 03 00:33:11 crc kubenswrapper[4811]: I1203 00:33:11.684159 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_e46e9cee-1ba4-434c-a39e-f42b7ed9dc7e/prometheus/0.log" Dec 03 00:33:12 crc kubenswrapper[4811]: I1203 00:33:12.005176 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_de7ccb94-f641-49de-b976-5171b761c8bd/elasticsearch/0.log" Dec 03 00:33:12 crc kubenswrapper[4811]: I1203 00:33:12.304109 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-dgvqc_13eccb10-6229-4374-a790-e8677f1438dd/prometheus-webhook-snmp/0.log" Dec 03 00:33:12 crc kubenswrapper[4811]: I1203 00:33:12.656619 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_b624c13e-3b23-45ef-9f45-93a6dc621cf0/alertmanager/0.log" Dec 03 00:33:27 crc kubenswrapper[4811]: I1203 00:33:27.737214 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-69c7c84c86-hgqhk_c4e434c3-79d8-4e80-8b6d-313e8cdf633e/operator/0.log" Dec 03 00:33:31 crc kubenswrapper[4811]: I1203 00:33:31.051404 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-58f46bc8bd-x9c5q_576300b3-9d27-41c9-83eb-23cc9560c2d6/operator/0.log" Dec 03 00:33:31 crc kubenswrapper[4811]: I1203 00:33:31.358640 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_9d70fe6a-ce2b-40a8-ba60-3daeab8bcb1d/qdr/0.log" Dec 03 00:33:32 crc kubenswrapper[4811]: I1203 00:33:32.939718 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:33:32 crc kubenswrapper[4811]: I1203 00:33:32.939993 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:34:02 crc kubenswrapper[4811]: I1203 00:34:02.940416 4811 patch_prober.go:28] interesting pod/machine-config-daemon-bc7p2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:34:02 crc kubenswrapper[4811]: I1203 00:34:02.941131 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:34:02 crc kubenswrapper[4811]: I1203 00:34:02.941214 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" Dec 03 00:34:02 crc kubenswrapper[4811]: I1203 00:34:02.942417 4811 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f"} pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:34:02 crc kubenswrapper[4811]: I1203 00:34:02.943537 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerName="machine-config-daemon" containerID="cri-o://405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" gracePeriod=600 Dec 03 00:34:03 crc kubenswrapper[4811]: E1203 00:34:03.073127 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:34:03 crc kubenswrapper[4811]: I1203 00:34:03.915202 4811 generic.go:334] "Generic (PLEG): container finished" podID="00463350-e27b-4e14-acee-d79ff4d8eda3" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" exitCode=0 Dec 03 00:34:03 crc kubenswrapper[4811]: I1203 00:34:03.915292 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerDied","Data":"405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f"} Dec 03 00:34:03 crc kubenswrapper[4811]: I1203 00:34:03.915365 4811 scope.go:117] "RemoveContainer" containerID="da6ab7c89c73f34f2e196fadad96f85f0b4d6e41e72b418a78dc01f58cbadf17" Dec 03 00:34:03 crc kubenswrapper[4811]: I1203 00:34:03.918175 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:34:03 crc kubenswrapper[4811]: E1203 00:34:03.920545 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.696751 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-76sgk/must-gather-xz442"] Dec 03 00:34:06 crc kubenswrapper[4811]: E1203 00:34:06.697312 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95415505-aeec-4c5e-a838-4480230358f9" containerName="smoketest-collectd" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.697323 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="95415505-aeec-4c5e-a838-4480230358f9" containerName="smoketest-collectd" Dec 03 00:34:06 crc kubenswrapper[4811]: E1203 00:34:06.697334 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1091034-7173-47b5-a5f5-bf052622dd72" containerName="curl" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.697340 4811 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e1091034-7173-47b5-a5f5-bf052622dd72" containerName="curl" Dec 03 00:34:06 crc kubenswrapper[4811]: E1203 00:34:06.697349 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95415505-aeec-4c5e-a838-4480230358f9" containerName="smoketest-ceilometer" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.697356 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="95415505-aeec-4c5e-a838-4480230358f9" containerName="smoketest-ceilometer" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.697468 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="95415505-aeec-4c5e-a838-4480230358f9" containerName="smoketest-collectd" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.697478 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="95415505-aeec-4c5e-a838-4480230358f9" containerName="smoketest-ceilometer" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.697490 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1091034-7173-47b5-a5f5-bf052622dd72" containerName="curl" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.698116 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76sgk/must-gather-xz442" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.713968 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-76sgk"/"default-dockercfg-drxzv" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.714159 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-76sgk"/"openshift-service-ca.crt" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.714394 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-76sgk"/"kube-root-ca.crt" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.716280 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-76sgk/must-gather-xz442"] Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.777597 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb2f331d-119a-4456-b0ea-8d321f3e71e7-must-gather-output\") pod \"must-gather-xz442\" (UID: \"fb2f331d-119a-4456-b0ea-8d321f3e71e7\") " pod="openshift-must-gather-76sgk/must-gather-xz442" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.777786 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpn8x\" (UniqueName: \"kubernetes.io/projected/fb2f331d-119a-4456-b0ea-8d321f3e71e7-kube-api-access-qpn8x\") pod \"must-gather-xz442\" (UID: \"fb2f331d-119a-4456-b0ea-8d321f3e71e7\") " pod="openshift-must-gather-76sgk/must-gather-xz442" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.878980 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpn8x\" (UniqueName: \"kubernetes.io/projected/fb2f331d-119a-4456-b0ea-8d321f3e71e7-kube-api-access-qpn8x\") pod \"must-gather-xz442\" (UID: \"fb2f331d-119a-4456-b0ea-8d321f3e71e7\") " pod="openshift-must-gather-76sgk/must-gather-xz442" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.879046 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb2f331d-119a-4456-b0ea-8d321f3e71e7-must-gather-output\") pod \"must-gather-xz442\" (UID: 
\"fb2f331d-119a-4456-b0ea-8d321f3e71e7\") " pod="openshift-must-gather-76sgk/must-gather-xz442" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.879435 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb2f331d-119a-4456-b0ea-8d321f3e71e7-must-gather-output\") pod \"must-gather-xz442\" (UID: \"fb2f331d-119a-4456-b0ea-8d321f3e71e7\") " pod="openshift-must-gather-76sgk/must-gather-xz442" Dec 03 00:34:06 crc kubenswrapper[4811]: I1203 00:34:06.899169 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpn8x\" (UniqueName: \"kubernetes.io/projected/fb2f331d-119a-4456-b0ea-8d321f3e71e7-kube-api-access-qpn8x\") pod \"must-gather-xz442\" (UID: \"fb2f331d-119a-4456-b0ea-8d321f3e71e7\") " pod="openshift-must-gather-76sgk/must-gather-xz442" Dec 03 00:34:07 crc kubenswrapper[4811]: I1203 00:34:07.014104 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76sgk/must-gather-xz442" Dec 03 00:34:07 crc kubenswrapper[4811]: I1203 00:34:07.510294 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-76sgk/must-gather-xz442"] Dec 03 00:34:07 crc kubenswrapper[4811]: I1203 00:34:07.949952 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76sgk/must-gather-xz442" event={"ID":"fb2f331d-119a-4456-b0ea-8d321f3e71e7","Type":"ContainerStarted","Data":"76d36a3e78bac8b670c0150a1b5d79e82d5887d31214cf3ddb236cc98838ed5f"} Dec 03 00:34:17 crc kubenswrapper[4811]: I1203 00:34:17.114388 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:34:17 crc kubenswrapper[4811]: E1203 00:34:17.116974 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:34:19 crc kubenswrapper[4811]: I1203 00:34:19.057487 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76sgk/must-gather-xz442" event={"ID":"fb2f331d-119a-4456-b0ea-8d321f3e71e7","Type":"ContainerStarted","Data":"baa34590b10b9b09d355e8e2c035422c1c75a651a17ebb16a3becd6f7106fe0e"} Dec 03 00:34:19 crc kubenswrapper[4811]: I1203 00:34:19.058025 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76sgk/must-gather-xz442" event={"ID":"fb2f331d-119a-4456-b0ea-8d321f3e71e7","Type":"ContainerStarted","Data":"0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e"} Dec 03 00:34:19 crc kubenswrapper[4811]: I1203 00:34:19.082819 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-76sgk/must-gather-xz442" podStartSLOduration=2.482288772 podStartE2EDuration="13.082789759s" podCreationTimestamp="2025-12-03 00:34:06 +0000 UTC" firstStartedPulling="2025-12-03 00:34:07.515029084 +0000 UTC m=+1687.656858556" lastFinishedPulling="2025-12-03 00:34:18.115530041 +0000 UTC m=+1698.257359543" observedRunningTime="2025-12-03 00:34:19.076678352 +0000 UTC m=+1699.218507834" watchObservedRunningTime="2025-12-03 00:34:19.082789759 +0000 UTC m=+1699.224619271" Dec 03 00:34:31 crc kubenswrapper[4811]: I1203 
00:34:31.116381 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:34:31 crc kubenswrapper[4811]: E1203 00:34:31.119244 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:34:44 crc kubenswrapper[4811]: I1203 00:34:44.114970 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:34:44 crc kubenswrapper[4811]: E1203 00:34:44.115916 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:34:58 crc kubenswrapper[4811]: I1203 00:34:58.115249 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:34:58 crc kubenswrapper[4811]: E1203 00:34:58.116016 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:35:00 crc kubenswrapper[4811]: I1203 00:35:00.104555 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-s6jkt_72e13ac0-ed91-41a8-8df4-ca88a2838fd3/control-plane-machine-set-operator/0.log" Dec 03 00:35:00 crc kubenswrapper[4811]: I1203 00:35:00.232960 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7bqms_6ee2362a-613c-4927-9525-3d7f87167ab7/kube-rbac-proxy/0.log" Dec 03 00:35:00 crc kubenswrapper[4811]: I1203 00:35:00.290330 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7bqms_6ee2362a-613c-4927-9525-3d7f87167ab7/machine-api-operator/0.log" Dec 03 00:35:10 crc kubenswrapper[4811]: I1203 00:35:10.122499 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:35:10 crc kubenswrapper[4811]: E1203 00:35:10.123191 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:35:12 crc kubenswrapper[4811]: I1203 00:35:12.490400 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-pn6dx_cab3d105-affe-4145-95c3-f9c4a6f766e3/cert-manager-controller/0.log" Dec 03 00:35:12 crc kubenswrapper[4811]: I1203 00:35:12.609313 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-5czlr_7a8903f1-ed1c-494b-8536-7138bd317a66/cert-manager-cainjector/0.log" Dec 03 00:35:12 crc kubenswrapper[4811]: I1203 00:35:12.645077 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-s74g6_32ef9f91-4356-47ed-9963-e18cc68f4771/cert-manager-webhook/0.log" Dec 03 00:35:24 crc kubenswrapper[4811]: I1203 00:35:24.115600 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:35:24 crc kubenswrapper[4811]: E1203 00:35:24.116541 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:35:27 crc kubenswrapper[4811]: I1203 00:35:27.902031 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t_bb2dfcce-0915-4f43-9a77-ad32fb713d1c/util/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.078638 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t_bb2dfcce-0915-4f43-9a77-ad32fb713d1c/util/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.094647 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t_bb2dfcce-0915-4f43-9a77-ad32fb713d1c/pull/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.126219 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t_bb2dfcce-0915-4f43-9a77-ad32fb713d1c/pull/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.301166 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t_bb2dfcce-0915-4f43-9a77-ad32fb713d1c/util/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.316455 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t_bb2dfcce-0915-4f43-9a77-ad32fb713d1c/pull/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.322871 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajfv4t_bb2dfcce-0915-4f43-9a77-ad32fb713d1c/extract/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.441919 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p_096bc1af-99b4-4653-8f27-7f030927b726/util/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.663528 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p_096bc1af-99b4-4653-8f27-7f030927b726/util/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.664483 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p_096bc1af-99b4-4653-8f27-7f030927b726/pull/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.670468 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p_096bc1af-99b4-4653-8f27-7f030927b726/pull/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.814599 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p_096bc1af-99b4-4653-8f27-7f030927b726/util/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.821028 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p_096bc1af-99b4-4653-8f27-7f030927b726/pull/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.853058 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107v56p_096bc1af-99b4-4653-8f27-7f030927b726/extract/0.log" Dec 03 00:35:28 crc kubenswrapper[4811]: I1203 00:35:28.989671 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g_40626377-4781-4a9f-b83b-ec64b75bb4e9/util/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.125052 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g_40626377-4781-4a9f-b83b-ec64b75bb4e9/pull/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.157811 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g_40626377-4781-4a9f-b83b-ec64b75bb4e9/util/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.177403 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g_40626377-4781-4a9f-b83b-ec64b75bb4e9/pull/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.305729 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g_40626377-4781-4a9f-b83b-ec64b75bb4e9/pull/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.324020 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g_40626377-4781-4a9f-b83b-ec64b75bb4e9/extract/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.334480 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhf67g_40626377-4781-4a9f-b83b-ec64b75bb4e9/util/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.491744 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj_85f78e40-d54c-4871-93c8-8ec0c9bfa5a0/util/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.617937 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj_85f78e40-d54c-4871-93c8-8ec0c9bfa5a0/util/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.647118 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj_85f78e40-d54c-4871-93c8-8ec0c9bfa5a0/pull/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.678273 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj_85f78e40-d54c-4871-93c8-8ec0c9bfa5a0/pull/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.824158 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj_85f78e40-d54c-4871-93c8-8ec0c9bfa5a0/util/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.854947 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj_85f78e40-d54c-4871-93c8-8ec0c9bfa5a0/pull/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.866467 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e89xqj_85f78e40-d54c-4871-93c8-8ec0c9bfa5a0/extract/0.log" Dec 03 00:35:29 crc kubenswrapper[4811]: I1203 00:35:29.965776 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr4rt_c3f5e37b-17ad-4570-82d9-03b680a5ff7c/extract-utilities/0.log" Dec 03 00:35:30 crc kubenswrapper[4811]: I1203 00:35:30.151536 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr4rt_c3f5e37b-17ad-4570-82d9-03b680a5ff7c/extract-utilities/0.log" Dec 03 00:35:30 crc kubenswrapper[4811]: I1203 00:35:30.179306 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr4rt_c3f5e37b-17ad-4570-82d9-03b680a5ff7c/extract-content/0.log" Dec 03 00:35:30 crc kubenswrapper[4811]: I1203 00:35:30.204067 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr4rt_c3f5e37b-17ad-4570-82d9-03b680a5ff7c/extract-content/0.log" Dec 03 00:35:30 crc kubenswrapper[4811]: I1203 00:35:30.329838 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr4rt_c3f5e37b-17ad-4570-82d9-03b680a5ff7c/extract-utilities/0.log" Dec 03 00:35:30 crc kubenswrapper[4811]: I1203 00:35:30.396976 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr4rt_c3f5e37b-17ad-4570-82d9-03b680a5ff7c/extract-content/0.log" Dec 03 00:35:30 crc kubenswrapper[4811]: I1203 00:35:30.527901 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lr4rt_c3f5e37b-17ad-4570-82d9-03b680a5ff7c/registry-server/0.log" Dec 03 00:35:30 crc kubenswrapper[4811]: I1203 00:35:30.537515 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5c6xs_1961ba8c-1b10-4d02-b842-0fe5be50900e/extract-utilities/0.log" Dec 03 00:35:30 crc kubenswrapper[4811]: I1203 00:35:30.712821 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5c6xs_1961ba8c-1b10-4d02-b842-0fe5be50900e/extract-content/0.log" Dec 03 00:35:30 crc kubenswrapper[4811]: I1203 00:35:30.726485 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5c6xs_1961ba8c-1b10-4d02-b842-0fe5be50900e/extract-utilities/0.log" Dec 03 00:35:30 crc kubenswrapper[4811]: I1203 00:35:30.740566 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5c6xs_1961ba8c-1b10-4d02-b842-0fe5be50900e/extract-content/0.log" Dec 03 00:35:30 crc kubenswrapper[4811]: I1203 00:35:30.851098 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5c6xs_1961ba8c-1b10-4d02-b842-0fe5be50900e/extract-utilities/0.log" Dec 03 00:35:30 crc kubenswrapper[4811]: I1203 00:35:30.870056 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5c6xs_1961ba8c-1b10-4d02-b842-0fe5be50900e/extract-content/0.log" Dec 03 00:35:31 crc kubenswrapper[4811]: I1203 00:35:31.101163 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tx7sj_0f19baaf-b832-4364-89bc-99ad74e0aae1/marketplace-operator/0.log" Dec 03 00:35:31 crc kubenswrapper[4811]: I1203 00:35:31.150736 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7ftt_69dda76c-6613-4f85-98cd-2597a053c1cb/extract-utilities/0.log" Dec 03 00:35:31 crc kubenswrapper[4811]: I1203 00:35:31.265517 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5c6xs_1961ba8c-1b10-4d02-b842-0fe5be50900e/registry-server/0.log" Dec 03 00:35:31 crc kubenswrapper[4811]: I1203 00:35:31.334850 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7ftt_69dda76c-6613-4f85-98cd-2597a053c1cb/extract-utilities/0.log" Dec 03 00:35:31 crc kubenswrapper[4811]: I1203 00:35:31.353122 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7ftt_69dda76c-6613-4f85-98cd-2597a053c1cb/extract-content/0.log" Dec 03 00:35:31 crc kubenswrapper[4811]: I1203 00:35:31.362486 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7ftt_69dda76c-6613-4f85-98cd-2597a053c1cb/extract-content/0.log" Dec 03 00:35:31 crc kubenswrapper[4811]: I1203 00:35:31.526413 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7ftt_69dda76c-6613-4f85-98cd-2597a053c1cb/extract-utilities/0.log" Dec 03 00:35:31 crc kubenswrapper[4811]: I1203 00:35:31.526530 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7ftt_69dda76c-6613-4f85-98cd-2597a053c1cb/extract-content/0.log" Dec 03 00:35:31 crc kubenswrapper[4811]: I1203 00:35:31.804894 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n7ftt_69dda76c-6613-4f85-98cd-2597a053c1cb/registry-server/0.log" Dec 03 00:35:38 crc kubenswrapper[4811]: I1203 00:35:38.115907 4811 scope.go:117] "RemoveContainer" 
containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:35:38 crc kubenswrapper[4811]: E1203 00:35:38.116802 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:35:43 crc kubenswrapper[4811]: I1203 00:35:43.501189 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-9t8rz_317f3f8c-58bf-420e-952e-8888d2b3fcf3/prometheus-operator/0.log" Dec 03 00:35:43 crc kubenswrapper[4811]: I1203 00:35:43.678594 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685b74c997-hlkk5_ec1a629d-26a7-4438-9134-d4e094ea8e99/prometheus-operator-admission-webhook/0.log" Dec 03 00:35:43 crc kubenswrapper[4811]: I1203 00:35:43.710139 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685b74c997-p7x74_090985bf-75aa-4307-a8b1-2c58e2746bf7/prometheus-operator-admission-webhook/0.log" Dec 03 00:35:43 crc kubenswrapper[4811]: I1203 00:35:43.856239 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-zl9h4_3292536f-4a34-490a-a0e1-15241a0637a8/operator/0.log" Dec 03 00:35:43 crc kubenswrapper[4811]: I1203 00:35:43.921989 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-rlj4w_69797472-4676-4805-be9a-41c75c0275b2/perses-operator/0.log" Dec 03 00:35:51 crc kubenswrapper[4811]: I1203 00:35:51.114715 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:35:51 crc kubenswrapper[4811]: E1203 00:35:51.115442 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:36:05 crc kubenswrapper[4811]: I1203 00:36:05.115227 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:36:05 crc kubenswrapper[4811]: E1203 00:36:05.117819 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:36:16 crc kubenswrapper[4811]: I1203 00:36:16.115116 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:36:16 crc kubenswrapper[4811]: E1203 00:36:16.115973 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:36:28 crc kubenswrapper[4811]: I1203 00:36:28.075770 4811 generic.go:334] "Generic (PLEG): container finished" podID="fb2f331d-119a-4456-b0ea-8d321f3e71e7" containerID="0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e" exitCode=0 Dec 03 00:36:28 crc kubenswrapper[4811]: I1203 00:36:28.075917 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76sgk/must-gather-xz442" event={"ID":"fb2f331d-119a-4456-b0ea-8d321f3e71e7","Type":"ContainerDied","Data":"0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e"} Dec 03 00:36:28 crc kubenswrapper[4811]: I1203 00:36:28.077578 4811 scope.go:117] "RemoveContainer" containerID="0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e" Dec 03 00:36:28 crc kubenswrapper[4811]: I1203 00:36:28.479497 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-76sgk_must-gather-xz442_fb2f331d-119a-4456-b0ea-8d321f3e71e7/gather/0.log" Dec 03 00:36:31 crc kubenswrapper[4811]: I1203 00:36:31.115148 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:36:31 crc kubenswrapper[4811]: E1203 00:36:31.116741 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:36:35 crc kubenswrapper[4811]: I1203 00:36:35.511193 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-76sgk/must-gather-xz442"] Dec 03 00:36:35 crc kubenswrapper[4811]: I1203 00:36:35.512096 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-76sgk/must-gather-xz442" podUID="fb2f331d-119a-4456-b0ea-8d321f3e71e7" containerName="copy" containerID="cri-o://baa34590b10b9b09d355e8e2c035422c1c75a651a17ebb16a3becd6f7106fe0e" gracePeriod=2 Dec 03 00:36:35 crc kubenswrapper[4811]: I1203 00:36:35.517698 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-76sgk/must-gather-xz442"] Dec 03 00:36:35 crc kubenswrapper[4811]: I1203 00:36:35.909793 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-76sgk_must-gather-xz442_fb2f331d-119a-4456-b0ea-8d321f3e71e7/copy/0.log" Dec 03 00:36:35 crc kubenswrapper[4811]: I1203 00:36:35.910536 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-76sgk/must-gather-xz442" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.022840 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpn8x\" (UniqueName: \"kubernetes.io/projected/fb2f331d-119a-4456-b0ea-8d321f3e71e7-kube-api-access-qpn8x\") pod \"fb2f331d-119a-4456-b0ea-8d321f3e71e7\" (UID: \"fb2f331d-119a-4456-b0ea-8d321f3e71e7\") " Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.022907 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb2f331d-119a-4456-b0ea-8d321f3e71e7-must-gather-output\") pod \"fb2f331d-119a-4456-b0ea-8d321f3e71e7\" (UID: \"fb2f331d-119a-4456-b0ea-8d321f3e71e7\") " Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.029544 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb2f331d-119a-4456-b0ea-8d321f3e71e7-kube-api-access-qpn8x" (OuterVolumeSpecName: "kube-api-access-qpn8x") pod "fb2f331d-119a-4456-b0ea-8d321f3e71e7" (UID: "fb2f331d-119a-4456-b0ea-8d321f3e71e7"). InnerVolumeSpecName "kube-api-access-qpn8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.066771 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb2f331d-119a-4456-b0ea-8d321f3e71e7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "fb2f331d-119a-4456-b0ea-8d321f3e71e7" (UID: "fb2f331d-119a-4456-b0ea-8d321f3e71e7"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.124308 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpn8x\" (UniqueName: \"kubernetes.io/projected/fb2f331d-119a-4456-b0ea-8d321f3e71e7-kube-api-access-qpn8x\") on node \"crc\" DevicePath \"\"" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.124348 4811 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb2f331d-119a-4456-b0ea-8d321f3e71e7-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.124545 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb2f331d-119a-4456-b0ea-8d321f3e71e7" path="/var/lib/kubelet/pods/fb2f331d-119a-4456-b0ea-8d321f3e71e7/volumes" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.138813 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-76sgk_must-gather-xz442_fb2f331d-119a-4456-b0ea-8d321f3e71e7/copy/0.log" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.139537 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-76sgk/must-gather-xz442" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.139538 4811 generic.go:334] "Generic (PLEG): container finished" podID="fb2f331d-119a-4456-b0ea-8d321f3e71e7" containerID="baa34590b10b9b09d355e8e2c035422c1c75a651a17ebb16a3becd6f7106fe0e" exitCode=143 Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.139645 4811 scope.go:117] "RemoveContainer" containerID="baa34590b10b9b09d355e8e2c035422c1c75a651a17ebb16a3becd6f7106fe0e" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.158796 4811 scope.go:117] "RemoveContainer" containerID="0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.193456 4811 scope.go:117] "RemoveContainer" containerID="baa34590b10b9b09d355e8e2c035422c1c75a651a17ebb16a3becd6f7106fe0e" Dec 03 00:36:36 crc kubenswrapper[4811]: E1203 00:36:36.194138 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baa34590b10b9b09d355e8e2c035422c1c75a651a17ebb16a3becd6f7106fe0e\": container with ID starting with baa34590b10b9b09d355e8e2c035422c1c75a651a17ebb16a3becd6f7106fe0e not found: ID does not exist" containerID="baa34590b10b9b09d355e8e2c035422c1c75a651a17ebb16a3becd6f7106fe0e" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.194189 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa34590b10b9b09d355e8e2c035422c1c75a651a17ebb16a3becd6f7106fe0e"} err="failed to get container status \"baa34590b10b9b09d355e8e2c035422c1c75a651a17ebb16a3becd6f7106fe0e\": rpc error: code = NotFound desc = could not find container \"baa34590b10b9b09d355e8e2c035422c1c75a651a17ebb16a3becd6f7106fe0e\": container with ID starting with baa34590b10b9b09d355e8e2c035422c1c75a651a17ebb16a3becd6f7106fe0e not found: ID does not exist" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.194222 4811 scope.go:117] "RemoveContainer" containerID="0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e" Dec 03 00:36:36 crc kubenswrapper[4811]: E1203 00:36:36.194739 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e\": container with ID starting with 0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e not found: ID does not exist" containerID="0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e" Dec 03 00:36:36 crc kubenswrapper[4811]: I1203 00:36:36.194794 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e"} err="failed to get container status \"0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e\": rpc error: code = NotFound desc = could not find container \"0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e\": container with ID starting with 0659dbc27be447bb4941a762c3d1b85ffd1ded756a339adcefcfb4489e9d875e not found: ID does not exist" Dec 03 00:36:36 crc kubenswrapper[4811]: E1203 00:36:36.239390 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb2f331d_119a_4456_b0ea_8d321f3e71e7.slice\": RecentStats: unable to find data in memory cache]" Dec 03 00:36:46 crc kubenswrapper[4811]: I1203 
00:36:46.115499 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:36:46 crc kubenswrapper[4811]: E1203 00:36:46.116372 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:37:01 crc kubenswrapper[4811]: I1203 00:37:01.115712 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:37:01 crc kubenswrapper[4811]: E1203 00:37:01.116163 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:37:14 crc kubenswrapper[4811]: I1203 00:37:14.114870 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:37:14 crc kubenswrapper[4811]: E1203 00:37:14.115835 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:37:28 crc kubenswrapper[4811]: I1203 00:37:28.115580 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:37:28 crc kubenswrapper[4811]: E1203 00:37:28.116376 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:37:40 crc kubenswrapper[4811]: I1203 00:37:40.118959 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:37:40 crc kubenswrapper[4811]: E1203 00:37:40.119809 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:37:55 crc kubenswrapper[4811]: I1203 00:37:55.116449 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:37:55 crc kubenswrapper[4811]: E1203 00:37:55.117635 
4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:38:06 crc kubenswrapper[4811]: I1203 00:38:06.114653 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:38:06 crc kubenswrapper[4811]: E1203 00:38:06.115584 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.017711 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-96twm"] Dec 03 00:38:12 crc kubenswrapper[4811]: E1203 00:38:12.018746 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb2f331d-119a-4456-b0ea-8d321f3e71e7" containerName="copy" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.018767 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb2f331d-119a-4456-b0ea-8d321f3e71e7" containerName="copy" Dec 03 00:38:12 crc kubenswrapper[4811]: E1203 00:38:12.018799 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb2f331d-119a-4456-b0ea-8d321f3e71e7" containerName="gather" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.018812 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb2f331d-119a-4456-b0ea-8d321f3e71e7" containerName="gather" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.019034 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb2f331d-119a-4456-b0ea-8d321f3e71e7" containerName="copy" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.019074 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb2f331d-119a-4456-b0ea-8d321f3e71e7" containerName="gather" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.020587 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.042140 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-96twm"] Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.182870 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2f5309b-6ecd-4498-9987-cd963cbdc01f-utilities\") pod \"redhat-operators-96twm\" (UID: \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\") " pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.183106 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp2mj\" (UniqueName: \"kubernetes.io/projected/a2f5309b-6ecd-4498-9987-cd963cbdc01f-kube-api-access-lp2mj\") pod \"redhat-operators-96twm\" (UID: \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\") " pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.183336 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2f5309b-6ecd-4498-9987-cd963cbdc01f-catalog-content\") pod \"redhat-operators-96twm\" (UID: \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\") " pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.285203 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2f5309b-6ecd-4498-9987-cd963cbdc01f-catalog-content\") pod \"redhat-operators-96twm\" (UID: \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\") " pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.285348 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2f5309b-6ecd-4498-9987-cd963cbdc01f-utilities\") pod \"redhat-operators-96twm\" (UID: \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\") " pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.285473 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp2mj\" (UniqueName: \"kubernetes.io/projected/a2f5309b-6ecd-4498-9987-cd963cbdc01f-kube-api-access-lp2mj\") pod \"redhat-operators-96twm\" (UID: \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\") " pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.285952 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2f5309b-6ecd-4498-9987-cd963cbdc01f-catalog-content\") pod \"redhat-operators-96twm\" (UID: \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\") " pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.286774 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2f5309b-6ecd-4498-9987-cd963cbdc01f-utilities\") pod \"redhat-operators-96twm\" (UID: \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\") " pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.317718 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lp2mj\" (UniqueName: \"kubernetes.io/projected/a2f5309b-6ecd-4498-9987-cd963cbdc01f-kube-api-access-lp2mj\") pod \"redhat-operators-96twm\" (UID: \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\") " pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.385241 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.626237 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-96twm"] Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.943515 4811 generic.go:334] "Generic (PLEG): container finished" podID="a2f5309b-6ecd-4498-9987-cd963cbdc01f" containerID="a7d914c67a8c9ef9a39d127aee1727f0a5edf41628fa45d54794b1f6ff18027d" exitCode=0 Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.944188 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96twm" event={"ID":"a2f5309b-6ecd-4498-9987-cd963cbdc01f","Type":"ContainerDied","Data":"a7d914c67a8c9ef9a39d127aee1727f0a5edf41628fa45d54794b1f6ff18027d"} Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.944225 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96twm" event={"ID":"a2f5309b-6ecd-4498-9987-cd963cbdc01f","Type":"ContainerStarted","Data":"8d3a2940843ad0a8359a90f65cdb95a41a95d8f481b02fef34cf9ee4d3fea774"} Dec 03 00:38:12 crc kubenswrapper[4811]: I1203 00:38:12.946416 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 00:38:13 crc kubenswrapper[4811]: I1203 00:38:13.954727 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96twm" event={"ID":"a2f5309b-6ecd-4498-9987-cd963cbdc01f","Type":"ContainerStarted","Data":"8e476b86b310319da21a56147064fc2b0df4d55a4b54bdbaff571969cf69dab2"} Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.207630 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mtb2b"] Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.209022 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.226443 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mtb2b"] Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.316817 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd791442-5b1f-4f3d-89e1-111db4103ad4-utilities\") pod \"certified-operators-mtb2b\" (UID: \"dd791442-5b1f-4f3d-89e1-111db4103ad4\") " pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.316889 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjk9q\" (UniqueName: \"kubernetes.io/projected/dd791442-5b1f-4f3d-89e1-111db4103ad4-kube-api-access-mjk9q\") pod \"certified-operators-mtb2b\" (UID: \"dd791442-5b1f-4f3d-89e1-111db4103ad4\") " pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.316982 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd791442-5b1f-4f3d-89e1-111db4103ad4-catalog-content\") pod \"certified-operators-mtb2b\" (UID: \"dd791442-5b1f-4f3d-89e1-111db4103ad4\") " pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.418553 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd791442-5b1f-4f3d-89e1-111db4103ad4-catalog-content\") pod \"certified-operators-mtb2b\" (UID: \"dd791442-5b1f-4f3d-89e1-111db4103ad4\") " pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.418624 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd791442-5b1f-4f3d-89e1-111db4103ad4-utilities\") pod \"certified-operators-mtb2b\" (UID: \"dd791442-5b1f-4f3d-89e1-111db4103ad4\") " pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.418650 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjk9q\" (UniqueName: \"kubernetes.io/projected/dd791442-5b1f-4f3d-89e1-111db4103ad4-kube-api-access-mjk9q\") pod \"certified-operators-mtb2b\" (UID: \"dd791442-5b1f-4f3d-89e1-111db4103ad4\") " pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.419476 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd791442-5b1f-4f3d-89e1-111db4103ad4-catalog-content\") pod \"certified-operators-mtb2b\" (UID: \"dd791442-5b1f-4f3d-89e1-111db4103ad4\") " pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.419754 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd791442-5b1f-4f3d-89e1-111db4103ad4-utilities\") pod \"certified-operators-mtb2b\" (UID: \"dd791442-5b1f-4f3d-89e1-111db4103ad4\") " pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.442585 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mjk9q\" (UniqueName: \"kubernetes.io/projected/dd791442-5b1f-4f3d-89e1-111db4103ad4-kube-api-access-mjk9q\") pod \"certified-operators-mtb2b\" (UID: \"dd791442-5b1f-4f3d-89e1-111db4103ad4\") " pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.578293 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.797448 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mtb2b"] Dec 03 00:38:14 crc kubenswrapper[4811]: W1203 00:38:14.807651 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd791442_5b1f_4f3d_89e1_111db4103ad4.slice/crio-9f2a1faebb96e8f41937e09e76ecf85a048a2d54540a160e4fa1f258c0dd2442 WatchSource:0}: Error finding container 9f2a1faebb96e8f41937e09e76ecf85a048a2d54540a160e4fa1f258c0dd2442: Status 404 returned error can't find the container with id 9f2a1faebb96e8f41937e09e76ecf85a048a2d54540a160e4fa1f258c0dd2442 Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.962201 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtb2b" event={"ID":"dd791442-5b1f-4f3d-89e1-111db4103ad4","Type":"ContainerStarted","Data":"7e4127f51774675efcaf441265da2aacd3539f482a2ac7951737161eb7f6ab0a"} Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.962296 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtb2b" event={"ID":"dd791442-5b1f-4f3d-89e1-111db4103ad4","Type":"ContainerStarted","Data":"9f2a1faebb96e8f41937e09e76ecf85a048a2d54540a160e4fa1f258c0dd2442"} Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.974500 4811 generic.go:334] "Generic (PLEG): container finished" podID="a2f5309b-6ecd-4498-9987-cd963cbdc01f" containerID="8e476b86b310319da21a56147064fc2b0df4d55a4b54bdbaff571969cf69dab2" exitCode=0 Dec 03 00:38:14 crc kubenswrapper[4811]: I1203 00:38:14.974553 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96twm" event={"ID":"a2f5309b-6ecd-4498-9987-cd963cbdc01f","Type":"ContainerDied","Data":"8e476b86b310319da21a56147064fc2b0df4d55a4b54bdbaff571969cf69dab2"} Dec 03 00:38:15 crc kubenswrapper[4811]: I1203 00:38:15.986302 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96twm" event={"ID":"a2f5309b-6ecd-4498-9987-cd963cbdc01f","Type":"ContainerStarted","Data":"29dc002475a424ef0169790f957647f1e0daddf2b96b8734b3d6cfa51f24b8b1"} Dec 03 00:38:15 crc kubenswrapper[4811]: I1203 00:38:15.988173 4811 generic.go:334] "Generic (PLEG): container finished" podID="dd791442-5b1f-4f3d-89e1-111db4103ad4" containerID="7e4127f51774675efcaf441265da2aacd3539f482a2ac7951737161eb7f6ab0a" exitCode=0 Dec 03 00:38:15 crc kubenswrapper[4811]: I1203 00:38:15.988228 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtb2b" event={"ID":"dd791442-5b1f-4f3d-89e1-111db4103ad4","Type":"ContainerDied","Data":"7e4127f51774675efcaf441265da2aacd3539f482a2ac7951737161eb7f6ab0a"} Dec 03 00:38:16 crc kubenswrapper[4811]: I1203 00:38:16.010884 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-96twm" podStartSLOduration=2.205815452 
podStartE2EDuration="5.010856955s" podCreationTimestamp="2025-12-03 00:38:11 +0000 UTC" firstStartedPulling="2025-12-03 00:38:12.946059877 +0000 UTC m=+1933.087889349" lastFinishedPulling="2025-12-03 00:38:15.75110134 +0000 UTC m=+1935.892930852" observedRunningTime="2025-12-03 00:38:16.004746447 +0000 UTC m=+1936.146575949" watchObservedRunningTime="2025-12-03 00:38:16.010856955 +0000 UTC m=+1936.152686437" Dec 03 00:38:17 crc kubenswrapper[4811]: I1203 00:38:17.004454 4811 generic.go:334] "Generic (PLEG): container finished" podID="dd791442-5b1f-4f3d-89e1-111db4103ad4" containerID="2d1dda67b644e280a711beda727bf2b3b5ea44b5f3a93cd988668d67cd0ccbb2" exitCode=0 Dec 03 00:38:17 crc kubenswrapper[4811]: I1203 00:38:17.005219 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtb2b" event={"ID":"dd791442-5b1f-4f3d-89e1-111db4103ad4","Type":"ContainerDied","Data":"2d1dda67b644e280a711beda727bf2b3b5ea44b5f3a93cd988668d67cd0ccbb2"} Dec 03 00:38:18 crc kubenswrapper[4811]: I1203 00:38:18.015438 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtb2b" event={"ID":"dd791442-5b1f-4f3d-89e1-111db4103ad4","Type":"ContainerStarted","Data":"d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28"} Dec 03 00:38:18 crc kubenswrapper[4811]: I1203 00:38:18.048436 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mtb2b" podStartSLOduration=2.58071328 podStartE2EDuration="4.048411608s" podCreationTimestamp="2025-12-03 00:38:14 +0000 UTC" firstStartedPulling="2025-12-03 00:38:15.989670905 +0000 UTC m=+1936.131500387" lastFinishedPulling="2025-12-03 00:38:17.457369213 +0000 UTC m=+1937.599198715" observedRunningTime="2025-12-03 00:38:18.037417293 +0000 UTC m=+1938.179246765" watchObservedRunningTime="2025-12-03 00:38:18.048411608 +0000 UTC m=+1938.190241110" Dec 03 00:38:18 crc kubenswrapper[4811]: I1203 00:38:18.115752 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:38:18 crc kubenswrapper[4811]: E1203 00:38:18.116153 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:38:22 crc kubenswrapper[4811]: I1203 00:38:22.385991 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:22 crc kubenswrapper[4811]: I1203 00:38:22.386700 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:22 crc kubenswrapper[4811]: I1203 00:38:22.452559 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:23 crc kubenswrapper[4811]: I1203 00:38:23.111030 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:23 crc kubenswrapper[4811]: I1203 00:38:23.163383 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-96twm"] Dec 03 00:38:24 
crc kubenswrapper[4811]: I1203 00:38:24.578665 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:24 crc kubenswrapper[4811]: I1203 00:38:24.579058 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:24 crc kubenswrapper[4811]: I1203 00:38:24.646002 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:25 crc kubenswrapper[4811]: I1203 00:38:25.078404 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-96twm" podUID="a2f5309b-6ecd-4498-9987-cd963cbdc01f" containerName="registry-server" containerID="cri-o://29dc002475a424ef0169790f957647f1e0daddf2b96b8734b3d6cfa51f24b8b1" gracePeriod=2 Dec 03 00:38:25 crc kubenswrapper[4811]: I1203 00:38:25.124935 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:26 crc kubenswrapper[4811]: I1203 00:38:26.099072 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mtb2b"] Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.100417 4811 generic.go:334] "Generic (PLEG): container finished" podID="a2f5309b-6ecd-4498-9987-cd963cbdc01f" containerID="29dc002475a424ef0169790f957647f1e0daddf2b96b8734b3d6cfa51f24b8b1" exitCode=0 Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.100520 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96twm" event={"ID":"a2f5309b-6ecd-4498-9987-cd963cbdc01f","Type":"ContainerDied","Data":"29dc002475a424ef0169790f957647f1e0daddf2b96b8734b3d6cfa51f24b8b1"} Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.101108 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mtb2b" podUID="dd791442-5b1f-4f3d-89e1-111db4103ad4" containerName="registry-server" containerID="cri-o://d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28" gracePeriod=2 Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.446810 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.545549 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2f5309b-6ecd-4498-9987-cd963cbdc01f-catalog-content\") pod \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\" (UID: \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\") " Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.545744 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp2mj\" (UniqueName: \"kubernetes.io/projected/a2f5309b-6ecd-4498-9987-cd963cbdc01f-kube-api-access-lp2mj\") pod \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\" (UID: \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\") " Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.545954 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2f5309b-6ecd-4498-9987-cd963cbdc01f-utilities\") pod \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\" (UID: \"a2f5309b-6ecd-4498-9987-cd963cbdc01f\") " Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.547600 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f5309b-6ecd-4498-9987-cd963cbdc01f-utilities" (OuterVolumeSpecName: "utilities") pod "a2f5309b-6ecd-4498-9987-cd963cbdc01f" (UID: "a2f5309b-6ecd-4498-9987-cd963cbdc01f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.555256 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f5309b-6ecd-4498-9987-cd963cbdc01f-kube-api-access-lp2mj" (OuterVolumeSpecName: "kube-api-access-lp2mj") pod "a2f5309b-6ecd-4498-9987-cd963cbdc01f" (UID: "a2f5309b-6ecd-4498-9987-cd963cbdc01f"). InnerVolumeSpecName "kube-api-access-lp2mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.647799 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2f5309b-6ecd-4498-9987-cd963cbdc01f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.647854 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp2mj\" (UniqueName: \"kubernetes.io/projected/a2f5309b-6ecd-4498-9987-cd963cbdc01f-kube-api-access-lp2mj\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.669726 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f5309b-6ecd-4498-9987-cd963cbdc01f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2f5309b-6ecd-4498-9987-cd963cbdc01f" (UID: "a2f5309b-6ecd-4498-9987-cd963cbdc01f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:38:27 crc kubenswrapper[4811]: I1203 00:38:27.749204 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2f5309b-6ecd-4498-9987-cd963cbdc01f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.040041 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.117822 4811 generic.go:334] "Generic (PLEG): container finished" podID="dd791442-5b1f-4f3d-89e1-111db4103ad4" containerID="d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28" exitCode=0 Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.117971 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtb2b" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.122015 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96twm" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.156243 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtb2b" event={"ID":"dd791442-5b1f-4f3d-89e1-111db4103ad4","Type":"ContainerDied","Data":"d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28"} Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.156326 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtb2b" event={"ID":"dd791442-5b1f-4f3d-89e1-111db4103ad4","Type":"ContainerDied","Data":"9f2a1faebb96e8f41937e09e76ecf85a048a2d54540a160e4fa1f258c0dd2442"} Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.156343 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96twm" event={"ID":"a2f5309b-6ecd-4498-9987-cd963cbdc01f","Type":"ContainerDied","Data":"8d3a2940843ad0a8359a90f65cdb95a41a95d8f481b02fef34cf9ee4d3fea774"} Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.156378 4811 scope.go:117] "RemoveContainer" containerID="d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.157636 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd791442-5b1f-4f3d-89e1-111db4103ad4-utilities\") pod \"dd791442-5b1f-4f3d-89e1-111db4103ad4\" (UID: \"dd791442-5b1f-4f3d-89e1-111db4103ad4\") " Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.157950 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd791442-5b1f-4f3d-89e1-111db4103ad4-catalog-content\") pod \"dd791442-5b1f-4f3d-89e1-111db4103ad4\" (UID: \"dd791442-5b1f-4f3d-89e1-111db4103ad4\") " Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.158039 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjk9q\" (UniqueName: \"kubernetes.io/projected/dd791442-5b1f-4f3d-89e1-111db4103ad4-kube-api-access-mjk9q\") pod \"dd791442-5b1f-4f3d-89e1-111db4103ad4\" (UID: \"dd791442-5b1f-4f3d-89e1-111db4103ad4\") " Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.163898 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd791442-5b1f-4f3d-89e1-111db4103ad4-kube-api-access-mjk9q" (OuterVolumeSpecName: "kube-api-access-mjk9q") pod "dd791442-5b1f-4f3d-89e1-111db4103ad4" (UID: "dd791442-5b1f-4f3d-89e1-111db4103ad4"). InnerVolumeSpecName "kube-api-access-mjk9q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.164474 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd791442-5b1f-4f3d-89e1-111db4103ad4-utilities" (OuterVolumeSpecName: "utilities") pod "dd791442-5b1f-4f3d-89e1-111db4103ad4" (UID: "dd791442-5b1f-4f3d-89e1-111db4103ad4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.196345 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-96twm"] Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.197374 4811 scope.go:117] "RemoveContainer" containerID="2d1dda67b644e280a711beda727bf2b3b5ea44b5f3a93cd988668d67cd0ccbb2" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.204908 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-96twm"] Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.216256 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd791442-5b1f-4f3d-89e1-111db4103ad4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd791442-5b1f-4f3d-89e1-111db4103ad4" (UID: "dd791442-5b1f-4f3d-89e1-111db4103ad4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.228746 4811 scope.go:117] "RemoveContainer" containerID="7e4127f51774675efcaf441265da2aacd3539f482a2ac7951737161eb7f6ab0a" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.251622 4811 scope.go:117] "RemoveContainer" containerID="d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28" Dec 03 00:38:28 crc kubenswrapper[4811]: E1203 00:38:28.252040 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28\": container with ID starting with d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28 not found: ID does not exist" containerID="d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.252094 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28"} err="failed to get container status \"d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28\": rpc error: code = NotFound desc = could not find container \"d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28\": container with ID starting with d2cdbe25446ebcc069118bcff060175dcbca4f40a31f5c90c1bda02191789c28 not found: ID does not exist" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.252120 4811 scope.go:117] "RemoveContainer" containerID="2d1dda67b644e280a711beda727bf2b3b5ea44b5f3a93cd988668d67cd0ccbb2" Dec 03 00:38:28 crc kubenswrapper[4811]: E1203 00:38:28.252735 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d1dda67b644e280a711beda727bf2b3b5ea44b5f3a93cd988668d67cd0ccbb2\": container with ID starting with 2d1dda67b644e280a711beda727bf2b3b5ea44b5f3a93cd988668d67cd0ccbb2 not found: ID does not exist" containerID="2d1dda67b644e280a711beda727bf2b3b5ea44b5f3a93cd988668d67cd0ccbb2" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 
00:38:28.252774 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d1dda67b644e280a711beda727bf2b3b5ea44b5f3a93cd988668d67cd0ccbb2"} err="failed to get container status \"2d1dda67b644e280a711beda727bf2b3b5ea44b5f3a93cd988668d67cd0ccbb2\": rpc error: code = NotFound desc = could not find container \"2d1dda67b644e280a711beda727bf2b3b5ea44b5f3a93cd988668d67cd0ccbb2\": container with ID starting with 2d1dda67b644e280a711beda727bf2b3b5ea44b5f3a93cd988668d67cd0ccbb2 not found: ID does not exist" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.252801 4811 scope.go:117] "RemoveContainer" containerID="7e4127f51774675efcaf441265da2aacd3539f482a2ac7951737161eb7f6ab0a" Dec 03 00:38:28 crc kubenswrapper[4811]: E1203 00:38:28.253192 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e4127f51774675efcaf441265da2aacd3539f482a2ac7951737161eb7f6ab0a\": container with ID starting with 7e4127f51774675efcaf441265da2aacd3539f482a2ac7951737161eb7f6ab0a not found: ID does not exist" containerID="7e4127f51774675efcaf441265da2aacd3539f482a2ac7951737161eb7f6ab0a" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.253222 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e4127f51774675efcaf441265da2aacd3539f482a2ac7951737161eb7f6ab0a"} err="failed to get container status \"7e4127f51774675efcaf441265da2aacd3539f482a2ac7951737161eb7f6ab0a\": rpc error: code = NotFound desc = could not find container \"7e4127f51774675efcaf441265da2aacd3539f482a2ac7951737161eb7f6ab0a\": container with ID starting with 7e4127f51774675efcaf441265da2aacd3539f482a2ac7951737161eb7f6ab0a not found: ID does not exist" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.253242 4811 scope.go:117] "RemoveContainer" containerID="29dc002475a424ef0169790f957647f1e0daddf2b96b8734b3d6cfa51f24b8b1" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.263097 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd791442-5b1f-4f3d-89e1-111db4103ad4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.263132 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjk9q\" (UniqueName: \"kubernetes.io/projected/dd791442-5b1f-4f3d-89e1-111db4103ad4-kube-api-access-mjk9q\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.263145 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd791442-5b1f-4f3d-89e1-111db4103ad4-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.284662 4811 scope.go:117] "RemoveContainer" containerID="8e476b86b310319da21a56147064fc2b0df4d55a4b54bdbaff571969cf69dab2" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.312809 4811 scope.go:117] "RemoveContainer" containerID="a7d914c67a8c9ef9a39d127aee1727f0a5edf41628fa45d54794b1f6ff18027d" Dec 03 00:38:28 crc kubenswrapper[4811]: E1203 00:38:28.326494 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2f5309b_6ecd_4498_9987_cd963cbdc01f.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2f5309b_6ecd_4498_9987_cd963cbdc01f.slice/crio-8d3a2940843ad0a8359a90f65cdb95a41a95d8f481b02fef34cf9ee4d3fea774\": RecentStats: unable to find data in memory cache]" Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.457187 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mtb2b"] Dec 03 00:38:28 crc kubenswrapper[4811]: I1203 00:38:28.469244 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mtb2b"] Dec 03 00:38:30 crc kubenswrapper[4811]: I1203 00:38:30.124578 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:38:30 crc kubenswrapper[4811]: E1203 00:38:30.125380 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:38:30 crc kubenswrapper[4811]: I1203 00:38:30.130648 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2f5309b-6ecd-4498-9987-cd963cbdc01f" path="/var/lib/kubelet/pods/a2f5309b-6ecd-4498-9987-cd963cbdc01f/volumes" Dec 03 00:38:30 crc kubenswrapper[4811]: I1203 00:38:30.132167 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd791442-5b1f-4f3d-89e1-111db4103ad4" path="/var/lib/kubelet/pods/dd791442-5b1f-4f3d-89e1-111db4103ad4/volumes" Dec 03 00:38:42 crc kubenswrapper[4811]: I1203 00:38:42.115614 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:38:42 crc kubenswrapper[4811]: E1203 00:38:42.117408 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:38:56 crc kubenswrapper[4811]: I1203 00:38:56.115644 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:38:56 crc kubenswrapper[4811]: E1203 00:38:56.116275 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bc7p2_openshift-machine-config-operator(00463350-e27b-4e14-acee-d79ff4d8eda3)\"" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" podUID="00463350-e27b-4e14-acee-d79ff4d8eda3" Dec 03 00:39:09 crc kubenswrapper[4811]: I1203 00:39:09.115652 4811 scope.go:117] "RemoveContainer" containerID="405a20b6b8cc4d3e98644ccafc777e250935d70a94fd4e4a375faaa317c0839f" Dec 03 00:39:09 crc kubenswrapper[4811]: I1203 00:39:09.500493 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bc7p2" 
event={"ID":"00463350-e27b-4e14-acee-d79ff4d8eda3","Type":"ContainerStarted","Data":"41d05146647d7e2285564f8586ed82515a18768258b2d6eeceb640c1d943be95"}